Massachusetts Institute of Technology
Department of Economics
Working Paper Series
THE SLOWDOWN OF THE ECONOMICS
PUBLISHING PROCESS
Glenn Ellison, MIT Dept of Economics
Working Paper 00-12
July 2000
Room E52-251
50 Memorial Drive
Cambridge, MA 02142
This paper can be downloaded without charge from the
Social Science Research Network Paper Collection at
http://papers.ssrn.com/paper.taf?abstract_id=XXXXXX
The Slowdown of the Economics Publishing Process
Glenn Ellison^
Massachusetts Institute of Technology and NBER
June 2000
^I would like to thank the National Science Foundation (SBR-9818534), the Sloan Foundation,
the Center for Advanced Study in the Behavioral Sciences, and the Paul E. Gray UROP Fund for
their support. This paper would not have been possible without the help of a great many people. I
am very grateful for the efforts that a number of journals made to supply me with data. In addition,
many of the ideas in this paper were developed in the course of a series of conversations with other
economists. I would especially like to thank Orley Ashenfelter, Susan Athey, Robert Barro, Gary
Becker, John Cochrane, Olivier Blanchard, Judy Chevalier, Ken Corts, Bryan Ellickson, Sara Fisher
Ellison, Frank Fisher, Drew Fudenberg, Joshua Gans, Edward Glaeser, Daniel Hamermesh, Lars
Hansen, Harriet Hoffman, Jim Hosek, Alan Krueger, Paula Larich, Vicky Longawa, Robert Lucas,
Wally Mullin, Paul Samuelson, Ilya Segal, Karl Shell, Andrei Shleifer and Kathy Simkanich without
implicating them for any of the views discussed herein. Richard Crump, Simona Jelescu, Christine
Kiang, Nada Mora and Caroline Smith provided valuable research assistance.
Abstract
Over the last three decades there has been a dramatic increase in the length of time nec-
essary to publish a paper in a top economics journal. This paper documents the slowdown
and notes that a substantial part is due to an increasing tendency of journals to require that
papers be extensively revised prior to acceptance. A variety of potential explanations for
the slowdown are considered: simple cost and benefit arguments; a democratization of the
publishing process; increases in the complexity of papers; the growth of the profession; and
an evolution of preferences for different aspects of paper quality. Various time series are
examined for evidence that the economics profession has changed along these dimensions.
Paper-level data on review times are used to assess connections between underlying changes
in the profession and changes in the review process. It is difficult to attribute much of the
slowdown to observable changes in the economics profession. Evolving social norms may
play a role.
JEL Classification No.: A14
Glenn Ellison
Department of Economics
Massachusetts Institute of Technology
50 Memorial Drive
Cambridge, MA 02142-1347
gellison@mit.edu
1 Introduction
Thirty or forty years ago papers in the top economics journals were typically accepted
within six to nine months of their submission. Today it is much more common for journals
to ask that papers be extensively revised, and on average the cycle of reviews and revisions
consumes about two years. The change in the publication process affects the economics
profession in a number of ways — it affects the timeliness of journals, the readability and
completeness of papers, the evaluation of junior faculty, etc. Probably most importantly,
the review process is the major determinant of how economists divide their time between
working on new projects, revising old papers and reviewing the work of others. It thus
has a substantial impact both on the aggregate productivity of the profession and on how
enjoyable it is to be an economist.
This paper has two main goals: to document how the economics publishing process has
changed; and to improve understanding of why it has changed. On the first question I find
that the slowdown is widespread. It has affected most general interest and field journals.
Part of the slowdown is due to slower refereeing and editing, but the largest portion reflects
a tendency of journals to require more and larger revisions. My main observation on the
second question is that it is hard to attribute most of the slowdown to observable changes
in the profession. I view a large part of the change as due to a shift in arbitrary social
norms.
While the review process at economics journals has lengthened dramatically, the change
has occurred gradually. Perhaps as a result it does not seem to have been widely recognized
(even by journal editors). In Section 2 I provide a detailed description of how review times
have grown and where in the process the changes are occurring. What may be most striking
to young economists is to see that in the early 1970's most papers got through the entire
process of reviews and revisions in well under a year. In earlier years, in fact, almost
all initial submissions were either accepted or rejected — the noncommittal "revise-and-
resubmit" option was used only in a few exceptional cases.
In the course of conversations with journal editors and other economists many potential
explanations for the slowdown have been suggested to me. I analyze four sets of explanations
in Sections 3 through 6. Each of these sections has roughly the same outline. First, I
describe a set of related explanations, e.g. 'A common impression is that over the last 30
years change X has occurred in the profession. For the following reasons this would be
expected to lead to a more drawn out review process . . . ' Then, I use whatever time series
evidence I can to examine whether change X has actually occurred and to get some idea
of the magnitude of the change. Finally, I look cross-sectionally at how review times vary
from paper to paper for evidence of the hypothesized connections between X and review
times. In these tests, I exploit a dataset which contains review times, paper characteristics
and author characteristics for over 5000 papers. The data include at least some papers from
all of the top general interest journals and contain nearly all post-1970 papers at some of
the journals.
Section 3 is concerned with the most direct arguments — arguments that the extent to
which papers are revised has gone up because the cost of revising papers has gone down and
the social benefit of revising papers has gone up. Specifically, one would imagine that the
costs of revisions have gone down because of improvements in computer software and that
the benefits of revisions have gone up because the information dissemination role of journals
has become less important. Most of my evidence on this explanation is anecdotal. I view
the explanation as hard to support, with perhaps the most important piece of evidence
being that the slowdown does not seem to have been intentional.
In the explanations discussed in Section 4, the exogenous change is the "democratiza-
tion" of the publishing process, i.e. a shift from an "old boys network" to a more merit-based
system. This might lengthen review times for a number of reasons: papers need to be read
more carefully; mean review times go up as privileged authors lose their privileges; etc.
Here I can be more quantitative and find that there is little or no support for the potential
explanations in the data. Time series data on the author-level and school-level concentra-
tion of publication suggest that there has not been a significant democratization over the
last thirty years. I find no evidence of prestige benefits or other predicted effects in the
cross-sectional data.
In Section 5 the exogenous change is an increase in the complexity of economics papers.
This might lengthen review times for a number of reasons: referees and editors will find
papers harder to read; authors will have a harder time mastering their own work; authors
will be less able to get advice from colleagues prior to submission, etc. I do find that papers
have grown substantially longer over time and that longer papers take longer in the review
process.^ Beyond this moderate effect, however, I find complexity-based explanations hard
to support. If papers were more complex relative to economists' understanding I would
expect economists to have become more specialized. Looking at the publication records
of economists with multiple papers in top journals, I do not see a trend toward increased
^Laband and Wells (1998) discuss changes in page lengths over a longer time horizon.
specialization. In the cross-section I also find little evidence of the hypothesized links
between complexity and delays. For example, papers do not get through the process more
quickly when they are assigned to an editor with more expertise.
In Section 6 the growth in the economics profession is the exogenous change. There are
two main channels through which growth might slow the review process at top journals:
it may increase the workload of editors and it may increase competition for the limited
number of slots in top journals. Explanations based on increased editorial workloads are
hard to support — at many top economics journals there has not been a substantial increase
in submissions for a long time. While the growth in the economics profession since 1970 has
been moderate (Siegfried, 1998), the competition story is more compelling. Journal citation
data indicates that the best general interest journals are gaining stature relative to other
journals. Some top journals are also publishing many fewer papers. Hence, there probably
has been a substantial increase in competition for space in the top journals. Looking at a
panel of journals, I find some evidence that journals tend to slow down more as they move
up in the journal hierarchy. This effect may account for about three months of the observed
slowdown at the top journals.
My main conclusion from Sections 3 through 6, however, is that it is hard to attribute
most of the slowdown to observable changes in the profession. The lengthening of papers
seems to be part of the explanation. An increase in the relative standing of the top journals
is probably another. Journals may have less of a sense of urgency now because of the wider
dissemination of working papers. Looking at all the data, however, my strongest impression
is that the economics profession today looks sufficiently like the economics profession in 1970
to make it hard to argue that the review process must be so different. Instead, I hypothesize
that much of the change may reflect a shift in the social norms that dictate what papers
should look like and how they should be reviewed.
The argument described above gives social norms a privileged status, in that the case
for it is made by showing a lack of evidence for other explanations.^ It also
provides an incomplete answer to the question of why the review process has lengthened,
because it does not tell us why social norms have shifted. Ellison (2000) provides one poten-
tial explanation for why social norms might shift in the direction of emphasizing revisions.^
^In some ways this can be thought of as similar to the way in which papers without any data on
technologies have attributed changes in the wage structure to "skill-biased technological change," and the
way in which unexplained differences in male-female or black-white wages are sometimes attributed to
discrimination.
^The model also attempts to provide a parsimonious explanation for other observed changes in papers,
such as the tendency for papers to be longer and to have longer introductions and more references.
Papers are modeled as differing along two quality dimensions, q and r. The q dimension
is interpreted as representing the clarity and importance of the paper's main contribution
and r-quality is interpreted as reflecting the other dimensions of quality that are often the
focus of revisions, e.g. exposition, extensions and robustness checks.^ The relative weight
that the profession places on q and r is an arbitrary social norm. Economists learn about
the social norm over time from their experiences as authors and referees. Whenever referees
try to hold authors to an unreasonably high standard the model predicts that social norms
will evolve in the direction of placing more emphasis on r. A long gradual evolution in this
direction can be generated by assuming that economists have a slight bias (that they do
not recognize) that makes them think that their own work is better than it is. Section 7
reviews this model and examines a couple of its implications empirically.
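The drift mechanism just described can be illustrated with a toy simulation. The code below is an illustrative sketch, not the model in Ellison (2000): the matching protocol, the learning rule, the bias parameter and all numbers are invented assumptions, chosen only to show how a small unrecognized self-serving bias can generate a slow, steady rise in the weight placed on r-quality.

```python
import random

# Illustrative sketch of norm drift (NOT Ellison's actual model):
# each economist carries a belief w about the weight the profession
# places on r-quality. As a referee, an economist applies his or her
# own w; as an author, the economist observes the referee's demand and
# updates toward it. A self-serving bias makes authors overrate their
# own work, so every demand feels slightly too high, and beliefs
# ratchet upward over time.

def simulate(n_economists=200, n_rounds=3000, bias=0.05, lr=0.1, seed=0):
    rng = random.Random(seed)
    w = [0.5] * n_economists  # initial norm: equal weight on q and r
    for _ in range(n_rounds):
        author, referee = rng.sample(range(n_economists), 2)
        # The referee demands revisions reflecting his or her belief.
        demanded = w[referee]
        # The author, overrating the paper, experiences the demand as
        # slightly higher than warranted and updates toward that.
        perceived = demanded + bias
        w[author] += lr * (perceived - w[author])
    return sum(w) / n_economists

early, late = simulate(n_rounds=500), simulate(n_rounds=5000)
print(early, late)  # the mean belief drifts upward as rounds accumulate
```

Because each author experiences referee demands as slightly higher than warranted, every update nudges beliefs upward, and the population norm shifts toward heavier weight on r even though no individual intends the change.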
There is a substantial literature on economics publishing. I draw on and update its
findings at several points.^ Four papers that I am aware of have previously discussed
submit-accept times: Coe and Weinstock (1967), Yohe (1980), Laband et al (1990) and
Trivedi (1993). All of these papers after the first make some note of increasing delays:
Yohe notes that the lags in his data are longer than those reported by Coe and Weinstock;
Laband et al examine papers published in REStat between 1976 and 1980 and find evidence
of a slowdown within this sample; Trivedi examines lags for econometrics papers published
in seven journals between 1986 and 1990 and notes both that there is a trend within his
data and that lags are longer in his data than in Yohe's. Laband et al (1990) also examine
some of the determinants of review times in a cross-section regression.
2 The slowdown
In this section I present some data to expand on the main observation of the paper —
that there has been a gradual but dramatic increase in the amount of time between the
submission of papers and their eventual acceptance at top economics journals. A large
portion of this slowdown appears to be attributable to a tendency of journals to require
more (and larger) revisions.
^Another interpretation is that q could reflect the author's contributions and r the quality of the
improvements that are suggested by the referees.
^I make particular use of data reported in Laband and Piette (1994b), Siegfried (1994), and Yohe (1980).
Hudson (1996), Laband and Wells (1998) and Siegfried (1994) provide related discussions of long-run trends
in the profession. See Colander (1989) and Gans (2000) for overviews of the literature on economics
publishing.
2.1 Increases in submit-accept times
Figure 1 graphs the mean length of time between the dates when articles were initially
submitted to several journals and the dates when they were finally accepted (including time
authors spent making required revisions) for papers published between 1970 and 1999.^
The data cover six general interest journals: American Economic Review (AER), Econometrica,
Journal of Political Economy (JPE), Quarterly Journal of Economics (QJE), Review
of Economic Studies (REStud), and the Review of Economics and Statistics (REStat).
The first five of these are among the six most widely cited journals today (on a per article
basis) and I take them to be the most prestigious economics journals.^ I include the sixth
because it was comparably prominent in the early part of the period.
While most of the year-to-year changes are fairly small, the magnitude of the increase
when aggregated up over the thirty-year period is startling. At Econometrica and the
Review of Economic Studies we see review times lengthening from 6-12 months in the early
seventies to 24-30 months in the late nineties. My data on the AER and JPE do not go
back nearly as far, but I can still see submit-accept times more than double (since 1979
at the JPE and since 1986 at the AER). The AER data include three outliers. From 1982
to 1984 Robert Clower ran the journal in a manner that must have been substantially
different from the process before or since; I do not regard these years as part of the trend
to be explained.^ The QJE is the one exception to the trend. Its review times followed a
^The data for Econometrica do not include the time between the receipt of the final revision of a paper
and its final acceptance. The same is true of the data on the Review of Economic Studies for 1970-1974.
Where possible, I include only papers published as articles and not shorter papers, notes, comments, replies,
errata, etc. The AER and JPE series are taken from annual reports, and presumably include all papers.
For 1993-1997 I also have paper-level data for these journals and can estimate that in those years the mean
submit-accept times given in the AER and JPE annual reports are 2.2 and 0.6 months shorter than the
figures I would have computed from the paper-level data. The AER data do not include the Papers and
Proceedings issues. The means for other journals were tabulated from data at the level of the individual
papers. For many of the journal-years, tables of contents and papers were inspected individually to determine
the article-nonarticle distinction. In other years, rules of thumb involving page lengths and title keywords
were used.
^The ratios of total citations in 1998 to publications in 1998 for the five journals are: Econometrica 185;
JPE 159; QJE 99; REStud 65; and AER 56. The AER is hurt in this measure by the inclusion of the
papers in the Papers and Proceedings issue. Without them, the AER's citation ratio would probably be
approximately equal to the QJE's. The one widely cited journal I omit is the Journal of Economic Literature
(which has a citation ratio of 67) because of the different nature of its articles.
^Note the one earlier datapoint from the AER: a mean time of 13.5 months in 1979. To those who may
be puzzling over the figure I would like to confirm that Clower reported in his 1982 editor's report that
for the previous three issues his mean submit-accept time was less than two months and his mean time to
rejection for rejected papers was 25 days. This seems quite remarkable before the advent of e-mail and fax
machines, especially given that in 1983 Clower reports receiving help from 550 referees. Clower indicates
that he received a great deal of positive feedback from authors, but also enough hate mail that he felt
obliged to share his favorite ("should you learn the date in advance I should be pleased to be present at
your hanging") in his first editor's report.
Figure 1: Changes in total review times at top general interest journals
The figure graphs the mean length of time between submission and acceptance for
papers published in six general interest journals between 1970 and 1999. The data for
Econometrica and the pre-1975 data for Review of Economic Studies do not include the
length of time between the resubmission of the final version of a paper and acceptance.
Data for the AER and JPE include all papers and are taken from annual editors' reports.
Data for the other journals are tabulated from records on individual papers and omit
shorter papers, notes, comments, replies, etc.
similar pattern up through 1990, but with the change of the editorial staff in 1991 there
was a clear break in the trend and mean total review times have now dropped to about
a year. I will discuss below the ways in which the QJE is and is not an exception to the
pattern of the other journals.
The slowdown of the publishing process illustrated above is not restricted to the top
general interest journals. Similar patterns are found throughout the field journals and in
finance. Table 1 reports mean total review times for various journals in 1970, 1980, 1990
and 1999.^ Ellison (2000) provides a broader overview of where the pattern is and is not
found in other disciplines in the social, natural and mathematical sciences.^10
In the discussion above, I've focused on changes in mean submit-accept times. When
one looks at the distribution of submit-accept times, the uniformity of the slowdown can
be striking. Figure 2 provides one (admittedly extreme) example. The figure presents
histograms of the submit-accept times for papers published in the Review of Economic
Studies in 1975 and 1995. In 1975 the modal experience was to have a paper accepted in
four to six months and seventy percent of the papers were accepted within a year. In 1995
almost nothing was accepted quickly. Only three of the twenty-eight papers were accepted
in less than sixteen months. The majority of the papers are in the sixteen-to-thirty-two-
month range, and there is also a substantial set of papers taking from three to five years.
2.2 Where is the increase occurring?
A common first reaction to seeing the figures on the slowdown of submit-accept times is
to imagine that the story is one of a breakdown of norms for timely refereeing. Everyone
has heard horror stories about slow responses and it is easy to imagine papers just sitting
for longer and longer periods in piles on referees' desks waiting to be read. Upon further
reflection, it is obvious that this cannot be the whole story — the increases in submit-accept
times are too large to be due to a single round of slow refereeing.^11
Figure 3 suggests that, in fact, slow refereeing is just a small part of the story. The
figure illustrates how the mean time between submission and the sending of an initial
decision letter has changed over time at four of the top five general interest journals.^12 At
^The definition of total review time and the years used varies across journals as explained in the table
notes.
^10Ellison (2000) also gives a cross-field view of the trend toward writing longer papers with more references.
^11See Hamermesh (1994) for a discussion of the distribution of refereeing times at several journals.
^12The set of papers included in the calculation varies somewhat from journal to journal so the figures
should not be compared across journals. Details are given in the notes to the figure.
Table 1: Changes in review times at various journals

                                              Mean total review time in year
Journal                                        1970     1980     1990     1999

Top five general interest journals
American Economic Review                          -   13.5^a     12.7        -
Econometrica                                  8.8^b   14.0^b   22.9^b   26.3^b
Journal of Political Economy                      -      9.5     13.3     20.3
Quarterly Journal of Economics                  8.1     12.7     22.0     13.0
Review of Economic Studies                   10.9^b     21.5     21.2     28.8

Other general interest journals
Canadian Journal of Economics                     -   11.3^a        -     16.6
Economic Inquiry                                  -    3.4^a        -     13.0
Economic Journal                                  -    9.5^a        -   18.2^b
International Economic Review                 7.8^b   11.9^b   15.9^b   16.8^b
Review of Economics and Statistics              8.1     11.4     13.1     18.8

Economics field journals
Journal of Applied Econometrics                   -        -   16.3^b   21.5^b
Journal of Comparative Economics                  -   10.3^b   10.9^b   10.1^b
Journal of Development Economics             5.6^bc    6.4^b   12.6^b   17.3^b
Journal of Econometrics                           -    9.7^b   17.6^b   25.5^b
Journal of Economic Theory                    0.6^b    6.1^b   17.0^b   16.4^b
Journal of Environmental Ec. & Man.               -    5.5^b    6.6^b   13.1^b
Journal of International Economics                -    8.7^b        -     16.2
Journal of Law and Economics                  6.6^b        -        -     14.8
Journal of Mathematical Economics            2.2^bc    7.5^b     17.5      8.5
Journal of Monetary Economics                     -        -   11.7^b   16.0^b
Journal of Public Economics                  2.6^bd   12.5^b   14.2^b    9.9^b
Journal of Urban Economics                        -    5.4^b   10.3^b    8.8^b
RAND Journal of Economics                         -    7.2^b     20.0     20.9

Journals in related fields
Accounting Review                                 -     10.1     20.7     14.5
Journal of Accounting and Economics               -   11.4^b   12.5^b   11.5^b
Journal of Finance                            6.5^b        -        -     18.6
Journal of Financial Economics               2.6^bc    7.5^b   12.4^b   14.8^b

The table records the mean time between initial submission and acceptance for articles
published in various journals in various years. Notes: a - Data from Yohe (1980) is for
1979 and probably does not include the review time for the final resubmission. b - Does
not include review time for final resubmission. c - Data for 1974. d - Data for 1972.
Figure 2: The distribution of submit-accept times at the Review of Economic Studies:
1975 and 1995
The figure contains a histogram of the time between submission and acceptance for
articles published in the Review of Economic Studies in 1975 and 1995. One 1995
observation at 84 months was omitted to facilitate the scaling of the figure.
Econometrica, the mean first response time in the late nineties is virtually identical to what
it was in the late seventies. At the JPE the latest figure is about two months longer than
the earliest; this is about twenty percent of the increase in review times between 1982 and
1999. The AER shows about a one-and-a-half month increase since 1986; this is about 15
percent as large as the increase in submit-accept times over the same period.^13 A discussion
of what may in turn have caused first responses to slow down must take into account that
the time a referee spends working on a report is small relative to the amount of time the
paper sits on his or her desk. I would imagine that the biggest causes of changes in first
response times are changes in the total demands on referees and changes in social norms
about acceptable delays. To the extent that referees wait until they have a sufficiently large
block of time free to complete a report before starting the task, some part of the slowdown
in first responses could also be due to increases in the complexity of papers and/or the
increases in how substantial a referee's suggestions for improvement are expected to be.
The pattern at the QJE is different from the others. The QJE experienced a dramatic
slowdown of first responses between 1970 and 1990, followed by an even more dramatic
speedup in the 1990's.^14 It is this difference (and reviewing many revisions quickly without
using referees) that accounts for the QJE's unique pattern of submit-accept times.
Assuming that the data on mean first response times are also representative of what has
happened at other journals and in earlier time periods, the majority of the overall increase
in submit-accept times must be attributable to one or more of four factors: an increase in
the number of times papers are being revised; an increase in the length of time authors take
to make revisions; an increase in the mean review time for resubmissions; and a growing
disparity between mean review times and mean review times for accepted papers. I now
discuss each of these factors.
Evidence from a variety of sources indicates that papers are now revised much more
often and more extensively than they once were. First, while older economists I interviewed
uniformly indicated that journals have required revisions for as long they could remember,
they also indicated that having papers accepted without revisions was not uncommon, that
revisions often focused just on expositional (or even grammatical) points, and that requests
^13Again, the figures from the Clower era are almost surely not representative of what happened earlier
and are probably best ignored.
^14Larry Katz has turned in the most impressive performance. His mean first response time is 39 days,
and none of the 1494 papers I observe him handling took longer than six months and one week. I have not
included estimates of mean first response times for the QJE between 1980 and 1990 because the increasing
slowdown of the late eighties was accompanied by recordkeeping that was increasingly hard to follow. Table
4 provides a related measurement that gives some feel for the severe delays of the late eighties.
Figure 3: Changes in first response times at top journals
The figure graphs the mean length of time between submission of a manuscript to
each of four general interest journals and the journal reaching an initial decision. The
Econometrica data is an estimate of the mean first response time for all submissions
(combining new submissions and resubmissions) derived from data in the editors' reports
on papers pending at the end of the year under the assumptions that papers arrive
uniformly throughout the year and no paper takes longer than twelve months. The data
for year t is the mean first response time for submissions arriving at Econometrica between
July 1st of year t-1 and June 30th of year t. Figures for the AER are estimated from
histograms of response times in the annual editor's reports and relate to papers arriving
in the same fiscal year as for Econometrica. Figures for the JPE are obtained from journal
annual reports. They appear to be the mean first response time for papers that are
rejected on the initial submission in the indicated year. The 1970 and 1980 QJE numbers
are the mean first response time for a random sample of papers with first responses in the
indicated year. Figures for the QJE for 1994 to 1997 are the mean for all papers with first
responses in the indicated year.
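The Econometrica estimate described in the note above can be illustrated with a small computation. This is a hypothetical sketch of the general idea, not the paper's actual procedure or data: under uniform arrivals of N papers per month, and assuming no response takes longer than twelve months, the number of pending papers that arrived a months before year end estimates N times the probability that a response takes more than a months, and summing those survival probabilities over ages gives the mean response time.

```python
# Hypothetical illustration of the estimation idea in the figure note
# (all numbers below are invented, not Econometrica's data): with
# uniform arrivals of N papers per month and no response taking longer
# than 12 months, pending_by_age[a] estimates N * P(response time > a),
# and for an integer-month response time T,
#   E[T] = sum over a of P(T > a)  ~  sum(pending_by_age) / N.

def mean_response_time(pending_by_age, arrivals_per_month):
    """pending_by_age[a] = papers pending at year end that arrived
    a months before the year end (a = 0, ..., 11)."""
    return sum(pending_by_age) / arrivals_per_month

# Invented example: 50 submissions per month; pending counts fall off
# with age because most papers receive a response within a few months.
pending = [50, 48, 40, 30, 20, 12, 7, 4, 2, 1, 1, 0]
print(mean_response_time(pending, 50))  # estimated mean: 4.3 months
```

The division by the monthly arrival rate converts the pending counts into survival probabilities, so the whole estimate requires only the year-end age distribution of pending papers, which is exactly what an editor's report can supply.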
for substantial changes were sometimes regarded as unreasonable unless particular problems
with the paper had been identified.^15
Second, I obtained quantitative evidence on the growth of revisions by reading through
old index card records kept by the QJE.^16 The first row of Table 2 extends the timespan of
our view of the slowdown, and indicates that at the QJE the slowdown begins around 1960
following a couple decades of constant review times.^17 The second row of Table 2 illustrates
that (despite the QJE being an exception to the rule of increasing total review times) the
mean number of revisions authors were required to make was roughly constant at around
0.6 from 1940 to 1960, and then increased steadily to a level of about 2.0 today.
A striking observation from the old QJE records is that the QJE used to have four
categories of responses to initial submissions rather than two — papers were sometimes
accepted as is and "accept-but-revise" was a separate category that was more common than
"revise-and-resubmit." Of the articles published in 1960, for example, 12 were accepted
on the initial submission, 11 initially received an accept-but-revise and 5 a revise-and-
resubmit.^18 Marshall's (1959) discussion of a survey of twenty-six journal editors suggests
that the QJE's practice of almost always making up-or-down decisions on initial submissions
(but sometimes using the accept-but-revise option) was the norm. Marshall never mentions
the possibility of a revise-and-resubmit and says
The writer who submits a manuscript will normally receive fairly prompt notice
of an acceptance or rejection. Twenty-three [of 26] editors reported that they
gave notification one way or the other within 1 to 2 months, and only 2 editors
reported a time-lag of as much as 4 months or more. ... The waiting period
between the time of acceptance and appearance in print can also be explained
in part by the necessity felt by many editors of having authors make extensive
revisions. Eighteen of the editors reported that major revisions were frequently
^15 An indirect source of evidence I've found amusing comes from looking at the organization of journals' databases.
The JPE database, for example, was only designed to allow for information to be recorded on up to two
revisions and the editorial staff have had to improvise methods (including writing over the data on earlier
revisions and entering data into a "comments" field) for keeping track of the now not uncommon third and
further revisions.
^16 The last two columns of the table are derived from the QJE's next-to-current computer database.
^17 The fact that it took only three to four months to accept papers in the 1940's seems remarkable today
given the handicaps under which the editors worked. One example that had not occurred to me until
reading through the records is that requests for multiple reports on a paper were done sequentially rather
than simultaneously — there were no photocopy machines and the journal had to wait for the first referee
to return the manuscript before sending it to the second.
^18 The 1970 breakdown was 3 accepts, 12 accept-but-revises, 9 revise-and-resubmits, and 1 reject (which
the author protested and eventually overturned on his third resubmission).
necessary. (p. 137)^19
The third row of Table 2 illustrates that the growth in revisions at the QJE is even more
dramatic if one does not count revisions that occurred after a paper was already accepted.
Table 2: Patterns of revisions over time at the Quarterly Journal of Economics

                                       Year of publication
                               1940  1950  1960  1970  1980  1985  1990  1995  1997
Mean submit-accept
  time (months)                 3.7   3.8   3.6   8.1  12.7  17.6  22.0  13.4  11.6
Mean number of revisions        0.6   0.8   0.6   1.2   1.4   1.5   1.7   2.2   2.0
Mean # of revisions
  before acceptance             0.4   0.1   0.2   0.5   0.8   1.0   1.7   2.2   2.0
Mean author time for first
  preaccept revision (months)   1.4   2.1   2.0   2.1   3.0   4.2   3.6   4.1   4.7
The table reports statistics on the handling of articles (not including notes, comments and
replies) published in the QJE in the indicated years. The first row is the mean total time
between submission and final acceptance (including time spent waiting for and reviewing
revisions to papers which had received an "accept-but-revise" decision). The second is the
mean number of revisions authors made. The third is the same, but only counting revisions
that were made prior to any acceptance letter being sent (including "accept-but-revise").
The fourth is the mean time between an author being sent a "revise-and-resubmit" letter
for the first time on a paper and the revision arriving at the journal office.
Data on the breakdown of total submit-accept times at the JPE provides some indirect
evidence on the growth of revisions. Table 3 records for each year since 1979 the mean
submit-accept time at the JPE and the breakdown of this time into time with the editors
(awaiting a decision letter), time with the authors (being revised) and time spent waiting
for referees' reports. The amount of time papers spend with the editors has increased
dramatically from about two months in 1979 - 1980 to more than seven in the most recent
years. Some of this increase may be due to editors devoting less effort to keeping up with
"'^Marshall's (1959) use of the term "major revision" is clearly different from how it would be understood
today. The time necessary for authors to make these revisions and for journals to approve them are part
of the acceptance-publication lags in his data. While he estimates that journals need "about 3 months to
'produce' an issue after all of the editorial work on it has been completed" and papers undoubtedly spend
two or more months on average waiting in the queue for the next available slot in the journal (the delay
would be one-and-a-half months on average at a quarterly journal even if there were no backlog at all), only
ten of the twenty-six journals in his sample had lags between acceptance and publication of 7 months or
more.
the flow of papers. The total amount of time a paper spends with the editors, however,
is the product of the amount of time a paper spends with the editors on each round and the
number of times it is revised. My guess would be that a substantial portion of the increase is
attributable to the average number of rounds having increased. Again, part of the increase
may also reflect editors waiting longer to write letters because they must clear a larger
block of time to contemplate longer referee reports, to describe more extensive revisions,
and/or to evaluate more substantial revisions.
Table 3: A breakdown of submit-accept times for the Journal of Political Economy

                          Breakdown of mean submit-accept time
Year of publication   1979  1980  1981  1982  1983  1984  1985  1986  1987  1988  1989
Total time             7.8   9.5  11.0   9.9  10.1  14.5  11.8  13.4  13.6  15.0  17.4
  with editors         1.8   2.3   3.2   3.4   3.4   4.8   4.9   4.6   5.0   6.4   4.7
  with authors         3.3   4.1   4.4   4.3   3.8   5.5   3.9   5.1   4.9   4.7   7.1
  with referees        2.7   3.1   3.4   2.2   2.9   4.3   3.0   3.7   3.7   3.8   4.6

Year of publication   1990  1991  1992  1993  1994  1995  1996  1997  1998  1999
Total time            13.3  14.3  14.8  17.3  16.1  17.5  19.8  16.5  20.0  20.3
  with editors         3.6   4.2   4.4   5.8   6.5   6.1   7.4   6.8   8.4   7.4
  with authors         4.9   6.1   6.0   6.5   4.7   6.5   7.5   3.9   6.7   6.6
  with referees        4.8   4.0   4.3   4.9   4.9   5.2   5.0   5.8   5.0   6.2
The table reports the mean submit-accept time in months for papers published in the JPE
in the indicated years, together with its breakdown into time with the editors, time with the
authors, and time with the referees. The figures were obtained from annual reports of the journal.
The data on submit-accept times at the top finance journals (some of which is in Table
1) provides another illustration of a trend toward more revisions. While the Journal of
Financial Economics is rightfully proud of the fact that its median first response time in
1999 was just 34 days (as it was when first reported in 1976), the trend in the journal's
mean submit-accept times is much like those at top economics journals. Mean submit-accept
times have risen from about 3 months in 1974 to about 15 months in 1999.^20 Similarly, the
Journal of Finance had a median turnaround time of just 41 days in 1999, but its mean
submit-accept time has risen from 6.5 months in 1979 to 18.6 months in 1999.^21
^20 The JFE only reports submission and final resubmission dates. The mean difference between these
was 2.6 months in 1974 (the journal's first year) and 14.8 months in 1999. Fourteen of the fifteen papers
published in 1974 were revised at least once.
^21 The distribution of submit-accept times at the JF is skewed by the presence of a few papers with very
long lags, but the median is still 15 months. Papers that ended up in its shorter papers section had an even
longer lag: 23.2 months on average.
A second factor contributing to the increase in submit-accept times is that authors are
taking longer to revise their papers. The best data source I have on this is again the QJE
records. The final row of Table 2 reports the mean time in months between the issuance
of a "revise-and-resubmit" letter in response to an initial submission and the receipt of the
revision for papers published in the indicated year.^22 The time spent doing first revisions
has increased steadily since 1940. Authors were about one month slower in 1980 than in
1970 and about one and a half months slower in the mid 1990s than in 1980. How much
of this is due to authors being asked to do more in a revision and how much is due to
authors simply being slower is impossible to know given the data limitations. The fact that
authors of the 1940 papers that were revised took only 1.4 months on average to revise
their manuscripts (including the time needed to have them retyped and waiting time for
the mail in both directions) suggests that the revisions must have been less extensive than
today's. The other source of information on authors' revision times available to me is the
data from the JPE in Table 3. This data mixes together increases in the time authors spend
per revision and increases in the number of revisions authors are asked to make. There is a
lot of variability from year to year, but the total time authors take revising seems to have
increased by about two and a half months since 1980.
While journals are only taking a little longer to review initial submissions, my impression
is that they are taking much longer to review resubmissions (although I lack data on this).
I do not, however, think of this as a fundamental cause of the slowdown. Instead, I think
of it as a reflection of the fact that first resubmissions are no longer thought of as final
resubmissions. My guess is that review times for final resubmissions have not changed
much.
A final possibility is that increases in first review times are a larger portion of the
overall increase in submit-accept times than is suggested by the data in Figure 3. Mean
first response times for accepted papers can be substantially different from the mean first
responses for rejected papers. Table 4 compares the first response time conditional on
eventual acceptance to more standard "unconditional" measures at the QJE and JPE.^23 At
the QJE the two series have been about a month apart since 1970, and it does not appear
that there are any trends in the difference between the two series. At the JPE the differences
^22 I do not include in the sample revisions which were made in response to "accept-but-revise" letters.
^23 In recent years a substantial number of submissions to the QJE have been rejected without using referees.
To provide a more accurate picture of trends in referees' evaluation times, I do not include the (very fast)
first response times for such papers in the QJE data for the years after 1993.
are much larger. While only recent data is available, slower mean first response times are
definitely a significant part of the overall slowdown. For papers published in 1979, the mean
submit-accept time was 7.8 months. This number includes an average of 3.3 months that
papers spent on authors' desks being revised, so the mean first response time conditional on
acceptance could not have been greater than 4.5 months and was probably at least a month
shorter. For papers published in 1995, the mean submit-accept time was 17.5 months and
the mean first response time was 6.5 months. Hence, the lengthening of the first response
probably accounts for at least one-quarter of the 1979-1995 slowdown.^24
Table 4: First response times for accepted and other papers

                          Mean first response time in months
Sample of papers        1970  1980  1985  1990  1992  1993  1994  1995  1996  1997
QJE: sent to referees    3.3   4.6                           3.5   3.2   2.9   2.7
QJE: accepted            4.8   5.8   7.2   9.0               4.8   3.7   3.2   3.7
JPE: rejected                        3.3   3.4   3.7   4.0   5.2         5.4   4.1
JPE: accepted                                    6.9   6.7   6.9   8.4  10.3   7.8
The table presents various mean first response times. The first row gives, for the QJE,
estimated means (from a random sample) for papers (including those rejected without using
referees) with first responses in 1970 and 1980 and the true sample mean for all papers with
first responses in 1994-1997 (not including those rejected without using referees). The second
row gives mean first response times for papers that were eventually accepted. For 1970 -
1990 the means are for papers published in the indicated year; for 1994 - 1997 numbers are
means for papers with first responses in the indicated year and accepted prior to August
of 1999. The third row gives mean first response times for papers that were rejected on the
initial submission by the JPE in the indicated year. The fourth row gives the mean first
response time for papers with first responses in the indicated year that were accepted prior
to January of 1999.
Overall, I would conclude that some fraction of the slowdown in the publishing process
(perhaps a quarter at the JPE) is due to slower first responses. A larger part of the
slowdown appears to be attributable to a practice of asking authors to make more and
larger revisions to their papers.
^24 For papers published in 1997, the mean submit-accept time was 16.5 months and the mean first-response
time was 9.8 months; the majority of the 1980-1997 slowdown may thus be attributed to slower first
responses. It appears, however, that 1997 is an outlier. One editor was very slow and the journal may have
responded to slow initial turnarounds by shortening and speeding up the revision process.
3 Costs and benefits of revisions
I now turn to the task of evaluating a number of potential explanations for the trends
discussed in the previous section. I begin with a simple set of arguments focusing on direct
changes in the costs and benefits of revising papers.
3.1 The potential explanation
The arguments I consider here are of the form: "Over the last three decades exogenous
change X has occurred. This has reduced the marginal cost to authors of making revisions
and/or increased the marginal benefit to the profession of having papers revised more
extensively. Hence it is now optimal to have longer submit-accept times." The two
environmental changes that seem most compelling as the X are improvements in computer
software and changes in how economics papers are disseminated.
Thirty years ago there were no microcomputers. Rudimentary word processing software
was available on mainframes in the 1960's, but until the late seventies or early eighties
revising a paper extensively usually entailed having it retyped.^25 Running regressions was
also much more difficult. While some statistical software existed on mainframes earlier,
statistical packages, as we now understand the term, mostly developed during the 1970's.^26
The first spreadsheet, Visicalc, appeared in 1979. Statistical packages for microcomputers
appeared in the early eighties and were adopted very quickly. The new software must have
reduced the cost of revising papers. It seems reasonable to suppose that journals may have
increased the number of revisions they requested as an optimal response. This might or
might not be expected to lead to an increase in the amount of time authors spend revising
papers (depending on whether the increased speed with which they can make revisions
offsets their being asked to do more), but would result in journals spending more time
reviewing the extra revisions.
Thirty years ago most economists would not hear about new research until it was published
in journals. Now, with widely available working paper series and web sites, it can
be argued that journals are less in the business of disseminating information and more in
the business of certifying the quality of papers. This makes timeliness of publication less
important and may have led journals to slow down the process and evaluate papers more
carefully. Even expositional issues can become more important: as long as the version that
thirty years ago would have appeared as the published version is now available as a working
^25 Smaller revisions were often accomplished by cutting and pasting.
^26 For example, the first version of SAS (for mainframes running MVS/TSO) appeared in 1976.
paper, readers are made unambiguously better off by delays to improve exposition. Those
who want to see the paper right away can look at the working paper and those who prefer
to wait for a more clearly exposited version (or who do not become interested until later)
will benefit from reading a clearer paper.
3.2 Evidence
While the stories above are plausible, I've found little evidence to support them. First,
I've discussed the slowdown with editors or former editors of all of the top general interest
journals (and editors of a number of field journals) and none mentioned to me that increas-
ing the number of rounds of revision or lengthening the review process was a conscious
decision. Instead, even most long-serving editors seemed unaware that there had been sub-
stantial changes in the length of the review process. A few editors indicated that they felt
that reviewing papers carefully and maintaining high quality standards is a higher prior-
ity than timely publication and this justifies current review times, but this view was not
expressed in conjunction with a view that the importance of high standards has changed.
Overwhelmingly, editors indicated that they handle papers now as they always have.
Annual editor's reports provide a source of contemporary written records on editors'
plans. At the AER, most of the editor's reports from the post-Clower era simply note that
the mean time to publication for accepted papers is about what it was the year before.
These observations are correct and given that the tables only contain one year of data it is
probably not surprising that there is no evident recognition that when one aggregates the
small year-to-year changes they become a large event. No motivation for lengthening the
review process is mentioned. The standard table format in the unpublished JPE editors'
reports includes three to five years of data on submit-accept times. Perhaps as a result
the JPE reports do show a recognition of a continuing slowdown (although not of its full
long-run magnitude.) The editors' comments do not suggest that the slowdown is planned
or seen as optimal. For example, the 1981 report says,
The increase in the time from initial submission to final publication of accepted
papers has risen by 5 months in the past two years, a most unsatisfactory trend.
. . . The articles a professional journal publishes cannot be timely in any short
run sense, but the reversal of this trend is going to be our major goal.
The 1982, 1984 and 1988 reports express the same desire. Only the 1990 report has a
different perspective. In good Chicago style it recognizes that the optimal length of the
review process must equate marginal costs and benefits, but takes no position on what this
means in practice:
Is this rate of review and revision and publication regrettable? Of course, almost
everyone would like to have his or her work published instantly, but we believe
that the referee and editorial comments and the time for reconsideration usually
lead to a significant improvement of an article. A detailed comparison of initial
submissions and printed versions of papers would be a useful undertaking: would
it further speed the editors or teach the contributors patience?
A second problem with the cost and benefit explanations I've mentioned is that they
do not seem to fit well with the timing of the slowdown, which I take to be a gradual
continuous change since about 1960. For example, the period from 1985 to 1995 had about
as large a slowdown as any other ten year period. Software can't really account for this,
because word processors and statistical packages had already been widely adopted by the
start of the period.^27 Web-based paper distribution was not important in 1995 and paper
working paper series had been around for a long time before 1985.^28 Another question that
is hard to answer with the cost and benefit explanations is why review times (especially for
theory papers) started to lengthen around 1960.
One question on which I can provide some quantitative evidence is the difference in
trends for theoretical and empirical papers. Since revising empirical papers has been made
easier both by improvements in word processing and by improvements in statistical pack-
ages, the cost of revision argument suggests that empirical papers may have experienced a
greater slowdown than theory papers.
I have data on submit-accept times (or submit-final resubmit times) for over 5500 articles
published since 1970. This includes most articles published in Econometrica, REStud and
REStat, papers published in the JPE and AER in 1993 or later, papers published in the QJE
in 1973-1977, 1980, 1985, 1990 or since 1993, and papers in the RAND Journal of Economics
since 1986. The data stop at the end of 1997 or the middle of 1998 for all journals. I had
research assistants inspect more than two thousand of the papers and classify them as
theoretical or empirical.^29 For the rest of the papers I created an estimated classification
by defining a continuous variable, Theory, to be equal to the mean of the theory dummies
^27 Later improvements have incorporated new features and make papers look nicer, but have not
fundamentally changed how hard it is to make revisions.
^28 For example, the current NBER working paper series started in 1973.
^29 The set consists of most papers in the 1990's and about half of the 1970's papers.
of papers with the same JEL code for which I had data.^30
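The imputation just described is simple to reproduce. The sketch below uses made-up papers and JEL codes (the actual dataset is not reproduced here): hand-classified papers carry a 0/1 theory dummy, and unclassified papers receive the mean of the dummies among classified papers sharing their JEL code.

```python
# Sketch of the Theory imputation described above (illustrative data only).
# Each paper is (JEL code, theory dummy), with None for papers that were
# not hand-classified by the research assistants.
papers = [("C7", 1.0), ("C7", 1.0), ("C7", None),
          ("E3", 0.0), ("E3", 1.0), ("E3", None)]

# Mean of the hand-classified theory dummies within each JEL code
classified = {}
for jel, d in papers:
    if d is not None:
        classified.setdefault(jel, []).append(d)
jel_mean = {jel: sum(ds) / len(ds) for jel, ds in classified.items()}

# Theory: the hand classification where available, the JEL-code mean otherwise
theory = [d if d is not None else jel_mean[jel] for jel, d in papers]
print(theory)  # [1.0, 1.0, 1.0, 0.0, 1.0, 0.5]
```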
One clear fact in the data is that authors of theoretical papers now face a longer review
process. In my 1990's subsample I estimate the mean submit-accept time for theoretical
papers to be 22.5 months and the mean for empirical papers to be 20.0 months. This should
not be surprising. We have already seen that Econometrica and REStud have longer review
processes than the other journals and these journals publish a disproportionate share of
theoretical papers. If one views differences across journals as likely due to idiosyncratic
journal-specific factors and asks how review times differ within each journal, the answer is
that there are no large differences. In regressions with journal fixed effects, journal-specific
trends and other control variables, the Theory variable is insignificant in every decade.^31
Certainly, there is no evidence of a more severe slowdown for empirical papers.
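To make the design concrete, here is a minimal sketch of a regression with journal fixed effects and journal-specific trends, run on entirely synthetic data (all names and numbers are invented; the actual regressions include additional controls). The journal dummies absorb level differences across journals, the interacted trends absorb differential drift, and the Theory coefficient is then identified from within-journal variation; by construction it should be near zero here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
journal = rng.integers(0, 3, n)            # three hypothetical journals
year = rng.uniform(1990, 1998, n)
theory = rng.uniform(0, 1, n)              # imputed Theory variable in [0, 1]

# Synthetic review times (days): journal levels plus journal-specific trends,
# with no true Theory effect
level = np.array([300.0, 450.0, 500.0])
trend = np.array([5.0, 8.0, 12.0])
days = level[journal] + trend[journal] * (year - 1990) + 30 * rng.standard_normal(n)

# Design matrix: journal dummies, journal-specific trends, and Theory
D = (journal[:, None] == np.arange(3)).astype(float)
X = np.column_stack([D, D * (year - 1990)[:, None], theory])
coef, *_ = np.linalg.lstsq(X, days, rcond=None)
print(round(coef[-1], 1))                  # Theory coefficient, near zero
```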
Overall, I feel that there is little evidence to suggest that the slowdown is an optimal
response to changes in the costs of revisions and the benefits of timely publication.
4 Democratization
I use the term "democratization" to refer to the idea that the publishing process at top
journals may have become more open and meritocratic over time.^32 For a number of
reasons, such a shift might lead to a lengthening of the review process. In this section, I
examine these explanations empirically. I find little evidence that a democratization has
taken place, and also find little evidence of cross-sectional patterns that would be expected
if the slowdown were linked to democratization.
4.1 The potential explanation
The starting point for democratization explanations for the slowdown is an assumption
that in the "old days" , economics journals were more of an old-boys network and were less
concerned with carefully evaluating the merits of submissions than they are today.^'^ There
are a number of reasons why such a shift might lead to a slowdown.
^30 On average 83% of papers in a JEL code have the modal classification.
^31 Looking journal-by-journal in the 1990's, theory papers have significantly shorter review times at the
AER (the coefficient estimate is -140 days with a t-statistic of 3.0) and at least moderately significantly
longer review times at Econometrica (coef. est. 120, t-stat. 1.8) and RAND (coef. est. 171, t-stat. 2.3).
See Section 4.2 for a full description of the regressions.
"^^Such a change could have occurred in response to changes in general societal norms, because of an
increased fear of lawsuits or for other reasons.
^33 Certainly some aspects of the process in the old days look less democratic. For example, in the 1940's
the QJE editorial staff kept track of referees using only initials. Presumably this was sufficient because most
(or all) of the referees were in Littauer.
First, carefully reading all of the papers that are submitted to a top economics journal
is a demanding task. If in some earlier era editors did not evaluate papers as carefully and
instead accepted papers by famous authors (or their friends), all papers could be reviewed
more quickly.
A democratization could also lead to higher mean submit-accept times by lengthening
review times for some authors and by changing the composition of the pool of accepted
papers. An example of an effect of the first type would be that authors who previously
enjoyed preferential treatment would presumably face longer delays. A more open review
process might change the composition of top journals, for example, by allowing more authors
from outside top schools or from outside the U.S. to publish and by reducing the share of
privileged authors. Authors who are not at top schools may have longer submit-accept
times because they have fewer colleagues able to help them improve their papers prior to
submission and because they are less able to tailor their submissions to editors' tastes.
Authors who are not native English speakers may have longer submit-accept times because
they need more editorial input at the end of the process to improve the readability of their
papers.
4.2 Evidence on democratization
I examine the idea that a democratization of the publication process has contributed to the
slowdown in two main steps: first looking at whether there is any evidence that publication
has become more democratic over the period and then looking for evidence of connections
between democratization and submit-accept times.
4.2.1 Has there been a democratization? Evidence from the characteristics of
accepted papers
The first place that I'll look for quantitative evidence on whether the process has become
more open and meritocratic since 1970 is in the composition of the pool of accepted papers.
A natural prediction is that a democratization of the review process (especially in combi-
nation with the growth of the profession) should reduce the concentration of publication.^34
The top X percent of economists would presumably capture a smaller share of publications
in top journals as other economists are more able to compete with them for scarce space,
^34 Of course this need not be true. For example it could be that the elite received preferential treatment
under the old system but were writing the best papers anyway, or that more meritocratic reviews simply
lead to publications being concentrated in the hands of the best authors instead of the most famous authors.
A possibility relevant to school-level concentration is that the hiring process at top schools may have become
more meritocratic and led to a greater concentration of talent.
and economists at the top N schools would presumably see their share of publications de-
cline as economists from lower ranked institutions are able to compete on a more equal
footing and grow in number.
The first two rows of Table 5 examine changes over time in the author-level and school-
level concentration of publication in top general interest journals. The first row gives the
herfindahl index of authors' "market shares" of all articles in the top five general interest
journals in each decade, i.e., it reports \sum_a s_{at}^2, where s_{at} is the fraction of all articles in decade
t written by author a.^35 A smaller value of the herfindahl index indicates that publication
was less concentrated. The data indicate that there was a small increase in concentration
between the 1970's and the 1980's and then a small decline between the 1980's and 1990's.
Despite the growth of the profession, the author-level concentration of publication in the
1990's is about what it was in the 1970's.
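Concretely, the herfindahl computation looks as follows. The five articles here are hypothetical, but the fractional-credit convention for coauthored articles matches the one described in the notes to the table:

```python
from collections import Counter
from fractions import Fraction

# Hypothetical decade of articles, each listing its authors; an author
# receives credit 1/k for a k-authored article
articles = [["A"], ["A", "B"], ["B"], ["C", "D"], ["A"]]

credit = Counter()
for authors in articles:
    for a in authors:
        credit[a] += Fraction(1, len(authors))

total = sum(credit.values())               # equals the number of articles
shares = {a: c / total for a, c in credit.items()}

# Herfindahl index: sum of squared author shares (smaller = less concentrated)
herfindahl = float(sum(s * s for s in shares.values()))
print(herfindahl)  # 0.36
```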
Table 5: Trends in authorship at top five journals

Decade                     1950's  1960's  1970's  1980's  1990's
Author-level herfindahl                    .00135  .00148  .00133
Percent by top 8 schools    36.5    31.8    27.2    28.2    33.8
Harvard share of QJE        14.5    12.3    12.7     6.4    12.5
Chicago share of JPE        15.6    10.6    11.2     7.0     9.4
Non-English name share                      26.3    25.2    30.6
Percent female                               3.5     4.5     7.5
The first row of the table reports the herfindahl index of authors' shares of articles in five
journals: AER, Econometrica, JPE, QJE and REStud. The second row gives the percent of
weighted pages in the AER, JPE, and QJE by authors from the top eight schools for that
decade. The third and fourth rows are percentages of pages with fractional credit given
for coauthored articles. The fifth and sixth rows give the percent of articles in the top five
journals written by authors with first names which were classified as indicating that the
author was a non-native English speaker and a woman, respectively.
While my data do not include authors' affiliations for pre-1989 observations, I can
examine changes in the school-level concentration of publication by comparing data for the
1990's with numbers for earlier decades reported by Siegfried (1994).^36 The second row
^35 Note that here I am able to make use of all articles that appeared in the AER, Econometrica, JPE,
QJE, and REStud between 1970 and some time in 1997-1998. Each author is given fractional credit for
coauthored articles. I include only regular articles, omitting where I can shorter papers, notes, comments,
etc. as well as articles in symposia or special issues, presidential addresses, etc.
■^"^Some of the numbers in Siegfried (1994) were in turn directly reprinted from Cleary and Edwards (1960),
Yotopoulos (1961) and Siegfried (1972).
of Table 5 reports the weighted fraction of pages in the AER, QJE, and JPE written by
authors from the top eight schools.^37 The numbers point to an increase in school-level
concentration, both between the 1970's and the 1980's and between the 1980's and the
1990's.^38 I have included the earlier decades in the table because they suggest a reason
why the impression that the profession has opened up since the "old days"
is fairly widespread. There was a substantial decline in the top eight schools' share of
publications between the 1950's and the 1970's.
The remainder of Table 5 examines other trends that may relate to democratization.
Rows 3 and 4 also piggyback on Siegfried's (1994) work to examine trends in the (page-
weighted) share of articles in the JPE and QJE written by authors at the journal's home
institution. In each case the substantial decline between the 1970's and the 1980's noted
by Siegfried was followed by a substantial increase between the 1980's and the 1990's. As
a result, the QJE has about the same level of Harvard authorship in the 1990's as in the
1970's, while the JPE has somewhat less of a Chicago concentration. While the JPE trend
could be taken as indicative of a democratization of the JPE, the fact that the combined
share of AER, QJE and JPE pages by Chicago authors has declined only slightly between
the 1970's and 1990's suggests that it is more likely attributable to an increase in Chicago
economists' desire to publish in the QJE.^39
The final two rows of the table report estimates of the fraction of articles in the top
five general interest journals written by non-native English speakers and by women. The
estimates for all three decades were obtained by classifying authors on the basis of their
first names.^40 Each group has increased its share of publications, but as a fraction of the
^37 Following Siegfried the "top eight" is defined to be the eight schools with the most pages in the three
journals in the decade. For the 1990's this is Harvard, Chicago, MIT, Princeton, Northwestern, Stanford,
Pennsylvania and UC-Berkeley. Differences between this calculation and other calculations I've been
carrying out include that it does not include publications in Econometrica and REStud, that it is based on
page-lengths not numbers of articles (with pages weighted so that JPE and QJE pages count for 0.707 and
0.658 AER pages, respectively) and that it includes shorter papers, comments, and articles in symposia and
special addresses (but still not replies and errata). One departure from Siegfried is that I always assign
authors to their first affiliation rather than splitting credit for authors who list affiliations with two or more
schools.
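The page-weighting scheme in the footnote above can be sketched in a few lines. A minimal illustration, in which the per-article record layout and the function name are my own inventions; only the 0.707 and 0.658 AER-equivalence factors come from the text:

```python
# AER-equivalence weights from the footnote: each JPE page counts as
# 0.707 AER pages and each QJE page as 0.658 AER pages.
AER_EQUIVALENT = {"AER": 1.0, "JPE": 0.707, "QJE": 0.658}

def page_weighted_share(articles, schools):
    """Share of AER-equivalent pages credited to a given set of schools.

    articles: hypothetical per-article records (journal, school, pages),
    with each author assigned to a single (first-listed) affiliation.
    """
    total = sum(AER_EQUIVALENT[j] * pages for j, _, pages in articles)
    credited = sum(AER_EQUIVALENT[j] * pages
                   for j, school, pages in articles if school in schools)
    return credited / total
```

For example, with one 10-page AER article by a Harvard author and one 10-page QJE article by anyone else, Harvard's page-weighted share is 10 / (10 + 6.58).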
[39] Most of the increase between the 1980's and 1990's is attributable to the top three schools' share of
the QJE having increased from 15.7 percent to 32.2 percent. The increase from the 1970's to the 1980's,
however, is in a period where the top eight schools' share of the QJE was declining, and there is still an
increase between the 1980's and 1990's if one removes the QJE from the calculation.
I measure Chicago's combined share of the three journals in the 1990's as 6.0 percent compared to 6.4
percent reported by Siegfried (1994) for the 1970's. Chicago's share of QJE pages was 1.1 percent in the
1970's and 8.8 percent in the 1990's.
[40] I assigned gender and native-English dummies to all first names (or middle names following an initial)
that appeared in the data. Authors who gave only initials are dropped from the numerator and denominator.
This process doubtless produced a number of errors, so I would be hesitant to regard the levels (as opposed
total author pool the changes are small.
My conclusion from Table 5 is that it is hard to find much evidence of a democratization
of the review process in the composition of the pool of published papers.
4.2.2 Evidence from cross-sectional variation
I now turn to the paper-level data. To examine whether there has been a democratization
the obvious thing to do with this data is to look for evidence that high status authors were
favored in the earlier years. The most relevant question that I can address here is whether
papers by high status authors that were accepted made it through the review process more
quickly.[41] I discuss a number of variables that may (among other things) proxy for high
status: publications in Brookings Papers on Economic Activity and the AER's Papers and
Proceedings issue, publications in earlier decades, institutional affiliation and current decade
research productivity.
Before discussing the results, I will take some time to provide more detail on the setup that
is common to all of the regression results I'll discuss in the paper. As mentioned above,
I have obtained data on submit-accept times for most papers published in Econometrica,
REStud and REStat, papers published in the JPE and AER since 1992 or 1993, and papers
published in the QJE in 1973-1977, 1980, 1985, 1990, and since 1993. The data end at
the end of 1997 or the middle of 1998 for all journals. I will estimate separate regressions
for each decade. I include papers in REStat in the 1970's sample, but not in subsequent
decades. Estimates should be regarded as derived from a large subset of the papers in the
1990's and from smaller and less representative subsamples in the 1970's and (especially)
the 1980's. The sample includes only standard full-length articles omitting (when feasible)
shorter papers, comments, replies, errata, articles in symposia or special issues, addresses,
etc.
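The decade-by-decade regressions described here are ordinary least squares of Lag on author and paper characteristics. The following is a minimal sketch on synthetic data, assuming classical homoskedastic standard errors and hypothetical regressor choices (the paper does not spell out either detail):

```python
import numpy as np

def ols_with_tstats(X, y):
    """OLS of y on X (an intercept is added automatically); returns
    coefficient estimates and absolute t-statistics, the two quantities
    reported for each variable in the submit-accept time regressions."""
    n = len(y)
    Xc = np.column_stack([np.ones(n), X])               # add intercept
    beta, _, _, _ = np.linalg.lstsq(Xc, y, rcond=None)  # least-squares fit
    resid = y - Xc @ beta
    sigma2 = resid @ resid / (n - Xc.shape[1])          # error variance
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(Xc.T @ Xc)))
    return beta, np.abs(beta / se)

# Synthetic example: submit-accept lags that lengthen with page count.
rng = np.random.default_rng(0)
pages = rng.uniform(5, 40, size=500)
num_author = rng.integers(1, 4, size=500)
lag = 200 + 5.0 * pages + 20.0 * num_author + rng.normal(0, 50, size=500)
beta, tstats = ols_with_tstats(np.column_stack([pages, num_author]), lag)
```

On this synthetic sample the estimated Pages coefficient comes out close to the true value of 5 days per page, with a large t-statistic.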
Summary statistics on the sets of papers for which data is available in each decade are
presented in Table 6. The summary statistics for the journal dummy variables provide a
more complete view of what is in the sample in each decade. I have omitted summary
statistics on the dummy variables that classify papers into fields. More information on how
this was done is given in Section 5.2.2. A decade-by-decade breakdown of the fraction of
papers in the top five journals which are classified as belonging to each field is given in
to the trends) as meaningful.
[41] The question one might most like to ask is whether papers by high status authors are more likely to be
accepted holding paper quality fixed. This, however, is not possible: I know little or nothing about the
pool of rejected papers at top journals.
Appendix B.[42] I will not give the definitions of all of the variables here, but will instead
discuss them in connection with the relevant results.
Table 6: Summary statistics for submit-accept time regressions

                        1970's            1980's            1990's
Variable              Mean      SD      Mean      SD      Mean      SD
Lag                 300.14  220.27    498.03  273.84    659.55  360.90
AuBrookP              0.07    0.45      0.06    0.30      0.09    0.35
AuP&P                 0.19    0.49      0.27    0.70      0.36    0.78
AuTop5Pubs70s         2.20    2.51      1.02    1.77      0.43    1.34
SchoolTop5Pubs         --      --        --      --      35.47   32.07
AuTop5Pubs            2.20    2.51      2.55    1.87      1.89    1.36
EnglishName           0.65    0.45      0.67    0.43      0.66    0.41
Female                0.03    0.16      0.04    0.17      0.07    0.22
UnknownName           0.09    0.29      0.02    0.15      0.01    0.11
JournalHQ              --      --        --      --       0.08    0.27
NumAuthor             1.39    0.60      1.49    0.64      1.73    0.71
Pages                13.09    6.37     17.43    7.52     24.20    8.72
Order                 6.81    4.08      6.40    3.70      5.39    3.15
Log(1+Cites)          2.52    1.31      2.91    1.28      2.33    1.03
EditorDistance         --      --        --      --       0.81    0.25
AER                   0.00    0.00      0.00    0.00      0.18    0.38
Econometrica          0.41    0.49      0.56    0.50      0.26    0.44
JPE                   0.00    0.00      0.00    0.00      0.18    0.38
QJE                   0.09    0.28      0.06    0.23      0.18    0.38
REStud                0.24    0.43      0.38    0.49      0.20    0.40
REStat                0.26    0.44      0.00    0.00      0.00    0.00
Number of obs.          1564              1154              1413
Sample coverage          51%               44%               74%
The table reports summary statistics for the 1970's, 1980's and 1990's regression samples.
The dependent variable for the regressions, Lag, is the length of time in days between
the submission of a paper and its final acceptance (or a proxy for this).[43] I use a number
of variables to look for evidence that papers by high status authors are accepted more
[42] The means reported in the table in Appendix B differ from the means of the field dummies in the
regression samples because they are computed for all full-length articles in the top five journals regardless
of whether some data was unavailable and because they do not include data from REStat.
[43] Because of data limitations I substitute the length of time between the submission date and the date of
final resubmission for papers in Econometrica and for pre-1975 papers in REStud. The 1973-1977 QJE data
use the time between submission and a paper receiving its initial acceptance (which was not infrequently
followed by a later resubmission).
quickly. The first two, AuBrookP and AuP&P, are the average number of papers that the
authors published in Brookings Papers on Economic Activity and the AER's Papers and
Proceedings issue in the decade in question.[44] Papers published in these two journals are
invited rather than submitted, making them a potential indicator of authors who are well
known or well connected.[45] Estimates of the relationship between publication in these
journals and submit-accept times during the 1970's can be found in column 1 of Table 7.
The estimated coefficients on AuBrookP and AuP&P are statistically insignificant and of
opposite signs. They provide little evidence that "high status" authors enjoy substantially
faster submit-accept times. The results for the 1980's and 1990's in the other two columns
are qualitatively similar. The estimated coefficients are always insignificant and the point
estimates on the two variables have opposite signs.
A second idea for constructing a measure of status is to use publications in an earlier
period. Unfortunately, I am limited by the fact that my database of publications (obtained
from Econlit) only starts in 1969. I am able to include publications in the top five journals
in the 1970's, AuTop5Pubs70s, as a potential indicator of high status in the 1980's and
1990's regressions.[46] The coefficient estimates for this variable in the two decades, reported
in columns 2 and 3 of Table 7, are very small and neither is statistically significant.
A third potential indicator of status is the ranking of the institution with which an
author is affiliated. Here I am even more limited in analyzing the "old days" in that
my data do not start until the 1990's. I do include in the 1990's regression a variable,
SchoolTop5Pubs, giving the total number of articles by authors at the author's institution
in the 1990's.[47] The distribution of publications by school is quite skewed. The measure
[44] More precisely, author-level variables are defined first by taking simple counts (not adjusted for coau-
thorship) of publications in the two journals. Article-level variables are then defined by taking the average
across the authors of the paper. Here and elsewhere throughout the paper I lack data on all but the first
author of papers with four or more authors.
[45] To give some feel for the variable, the top four authors in Brookings in 1990-1997 are Jeffrey Sachs,
Rudiger Dornbusch, Andrei Shleifer and Robert Vishny, and the top four authors in Papers and Proceedings
in 1990-1997 are James Poterba, Kevin Murphy, James Heckman and David Cutler. Another justification
for the status interpretation is that both AuBrookP and AuP&P are predictive of citations for papers in
my dataset.
[46] This variable is defined using my standard set of five journals, giving fractional credit for coauthored
papers, and omitting short papers, comments, papers in symposia, etc. The variable is first defined for each
author and I create a paper-level variable by averaging these values across the coauthors of a paper.
[47] This variable is defined using my standard set of five journals, giving fractional credit for coauthored
papers, and omitting short papers, comments, papers in symposia, etc. The variable is first defined for each
author and I create a paper-level variable by averaging these values across the coauthors of a paper. Each
author is regarded as having only a single affiliation for each paper, which I usually take to be the first
affiliation listed (ignoring things like "and NBER", but also sometimes names of universities that may
represent an author's home or the institution he or she is visiting). Many distinct affiliations were manually
combined to avoid splitting up departments from local research centers, and to correct misspellings and
Table 7: Basic submit-accept time regressions

                       1970's            1980's            1990's
Variable            Coef.  T-stat.    Coef.  T-stat.    Coef.  T-stat.
AuBrookP             15.4    1.15     -27.7    0.90     -26.2    1.24
AuP&P                -5.0    0.39      16.2    1.09      16.9    1.33
AuTop5Pubs70s         --      --        1.5    0.30       4.1    0.59
SchoolTop5Pubs        --      --        --      --       -0.3    0.92
AuTop5Pubs           -6.9    2.54      -2.3    0.46     -16.3    2.15
EnglishName           1.3    0.09       4.3    0.22      -2.4    0.11
Female              -37.3    1.10     -56.9    1.25      49.0    1.11
UnknownName           3.1    0.14      -9.8    0.18      -5.2    0.06
JournalHQ             --      --        --      --        7.9    0.22
NumAuthor           -21.8    2.38      16.1    1.23      23.1    1.68
Pages                 5.5    5.55       5.0    3.93       5.4    4.35
Order                 1.8    1.29       4.9    2.06       8.6    2.69
log(1+Cites)        -21.4    4.83     -11.8    1.65     -38.8    3.67
Journal Dummies       Yes               Yes               Yes
Journal Trends        Yes               Yes               Yes
Field Dummies         Yes               Yes               Yes
Number of obs.        1564              1154              1413
R-squared             0.12              0.10              0.19
The table presents estimates from three regressions. The dependent variable for each re-
gression, Lag, is the length of time between the submission of a paper to a journal and its
acceptance in days (or a proxy). The samples are subsets of the set of papers published in
the top five or six general interest economics journals between 1970 and 1998 as described
in the text and in Table 6. The regression is estimated separately for each decade. The
independent variables are characteristics of the author(s) and the paper. All regressions
include journal dummies, journal-specific linear time trends, and dummies for seventeen
fields of economics. Coefficient estimates are presented along with the absolute value of the
t-statistics for the estimates.
is about 100 for each of the top three schools, but only five other institutions have values
above 35 and only fourteen have values between 20 and 35. The fact that economists at
the top schools have a substantial share of all publications, however, results in the mean
of SchoolTop5Pubs being 35.5. While we are all aware that the most "highly ranked"
departments are not always the most productive, productivity does look to be very highly
correlated with prestige in my data.[48] The estimated coefficient on SchoolTop5Pubs in
column 3 of Table 7 indicates that authors from schools with higher output had their
papers accepted slightly more quickly, but that the differences are not significant. The
coefficient estimate of -0.32 is relatively small: such an effect would allow economists at
the top schools to get their papers accepted about one month faster than economists from
the bottom schools.[49] While this cannot tell us whether a position at a top school
conferred a status advantage in the 1970's, it does confirm that the compositional argument
(that mean times might be longer now because the pool of published papers has shifted to
include more economists from lower-ranked schools) is not important.[50]
I have also included one additional variable in the regressions, AuTop5Pubs, that may
proxy for status, but which is more difficult to interpret. The variable is the average
number of articles that a paper's authors published in top five journals in the decade in
question.[51] Authors who are publishing more in top journals may be regarded as having
high status. Any negative relationship between AuTop5Pubs and Lag may also be given
an endogeneity interpretation. The authors who are able to publish a lot of papers in top
journals will disproportionately be those who (whether by luck, hard work, or ability) are
very efficient at getting their papers through the journals and thus have the time to write
more papers. The regression results provide fairly clear evidence that authors who are more
successful in a decade in getting their papers in the top journals are also getting their papers
accepted more quickly. The estimates on AuTop5Pubs are negative in all three decades,
and the 1970's and 1990's estimates are highly significant. While the estimated coefficient
for the 1990's is about two and a half times as large as the estimated coefficient for the
variations in how names are reported, but this is a difficult task and some errors surely remain, especially
at foreign institutions. Different academic units within the same university are also combined.
[48] For example, the top ten schools in the 1990's according to the measure are Harvard, MIT, Chicago,
Northwestern, Princeton, Stanford, Pennsylvania, Yale, UC-Berkeley and UCLA. The second ten are
Columbia, UCSD, Michigan, Rochester, the Federal Reserve Board, Boston U, NYU, Tel Aviv, Toronto
and the London School of Economics.
[49] To the extent that there is a relationship between the school productivity variable and submit-accept
times, it looks very linear.
[50] Of course, we already knew this because we saw that there has not been a shift in the pool of accepted
papers in the direction of including more economists from outside the top schools.
[51] Again, fractional credit is given for coauthored papers.
1970's, given that mean submit-accept times are more than twice as long in the 1990's as
in the 1970's, the results can be thought of as indicating that this effect is of roughly the
same magnitude throughout the period. The constancy of the effect does make me feel
comfortable in concluding that if the results are indicative of a status benefit, they are
reflecting a benefit which has not declined over time. I take a general theme from these
regressions to be that it is hard to find any evidence of a democratization in looking at
which authors had faster submit-accept times in the 1970's, 1980's and 1990's.
A second motivation for looking at the cross-sectional pattern of submit-accept times (in
addition to simply asking whether there was a democratization) is to examine the arguments
for why mean submit-accept times might have increased if there was a democratization.
I have already addressed two. First, I found no evidence that mean submit-accept times
have increased because high status authors used to enjoy prestige benefits and now do not.
Second, the fact that there is not much of a relationship between submit-accept times and
school rankings is inconsistent with the idea that mean review times might have increased
because more authors are from lower ranked schools and get less help from their colleagues.
To test one other potential explanation I included in the regressions a variable for whether
the authors of a paper have first names suggesting that they are native English speakers,
EnglishName.[52] Estimated coefficients on this variable are extremely small and highly
insignificant in each decade. I already mentioned that the increase in the number of non-
native English speakers publishing in the top journals has been slight. Together these
results clearly indicate that the idea that more authors today may be non-native English
speakers who need more editorial help is not relevant to understanding the slowdown.
5 Complexity and specialization
This section examines a set of explanations for the slowdown of the economics publishing
process based on the common perception that economics papers are becoming more com-
plex and the field more specialized. In general, I find little evidence that the profession
has become more specialized and also find few of the links necessary to make increased
complexity a candidate explanation for the slowdown. One connection I do find is that
economics papers are becoming longer, and longer papers have longer submit-accept times
in the cross-section. This relationship might account for one to two months of the slowdown.
[52] Here again I take an average of the authors' characteristics for coauthored papers. Switching to an
indicator equal to one if any author has a name associated with being a native English speaker does not
change the results.
5.1 The potential explanation
It seems to be a fairly common belief that economics papers have become increasingly
technical, sophisticated and specialized over the last few decades. There are at least three
reasons why such a trend could lead to a lengthening of the review process.
First, it may take longer for referees and editors to read and digest papers that are more
complex.
Second, increased complexity and specialization may make it necessary for authors to get
more input from referees. One story would be that increased complexity reduces authors'
understanding of their own papers, so that they need more help from referees and editors to
get things right. A related story I find more compelling is that in the old days authors were
able to get advice about expositional and other matters from colleagues. With increasing
specialization colleagues are less able to provide this service, and it may be necessary to
substitute advice from referees.
Third, increased complexity and specialization may lead editors to change the way they
handle papers. In the old days, this story goes, editors were able to understand papers
and digest referee reports, clearly articulate what improvements would make the paper
publishable, and then check for themselves whether the improvements had been made on
resubmission. Now, being less able to understand papers and referees' comments, editors
may be less able to determine and describe ex ante what revisions would make a paper
publishable, which leads to multiple rounds of revisions. In addition, as editors lose the
ability to assess revisions, more rounds must be sent back to referees, lengthening the time
required for each round.
5.2 Has economics become more complex and specialized?
Let me first suggest that for a couple of reasons we should not regard it as obvious that
economics has become more complex over the last three decades. First, by 1970 there
was already a large amount of very technical and inaccessible work being done, and the
1990's has seen the growth of a number of branches with relatively standardized easy-to-
read papers, e.g. natural experiments, growth regressions, and experimental economics. To
take one not so random sample of economists, the Clark Medal winners of the 1980's were
Michael Spence, James Heckman, Jerry Hausman, Sandy Grossman and David Kreps, while
the 1990's winners were Paul Krugman, Lawrence Summers, David Card, Kevin Murphy
and Andrei Shleifer.
Second, what matters for the explanations above is not that economics papers are more
complex, but rather that they are more difficult for economists (be they authors, referees
or editors) to read, write and evaluate. While the game theory found in current industrial
organization theory papers might be daunting to an economist transported here from the
1970's, it is second nature to researchers in the field today. In its February 1975 issue,
the QJE published articles by Joan Robinson and Steve Ross. The August issue included
papers by Nicholas Kaldor and Don Brown. To me, the range of skills necessary to evaluate
these papers seems much greater than that necessary to evaluate papers in a current QJE
issue.
5.2.1 Some simple measures
In a couple of easily quantifiable dimensions, papers have changed in a manner consistent
with increasing complexity.
Figure 4 graphs the median page length of articles over time at the top general interest
journals.[53] As noted by Laband and Wells (1998) there has been a fairly rapid growth in
the length of published papers since 1970. At the AER, JPE and QJE articles are now
about twice as long as they were in 1970. At Econometrica and REStud articles are about
75% longer. Only REStat shows a more modest growth.
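The length measure behind Figure 4 (the median page count among each issue's first five articles, per the accompanying footnote) can be sketched as follows; the input layout is hypothetical:

```python
from statistics import median

def typical_article_length(issues):
    """Median page length among the first five articles of each issue.

    issues: one list of article page counts per issue, in order of
    appearance. Truncating at five articles per issue keeps the measure
    insensitive to how many short notes a journal happens to print.
    """
    first_five = [pages for issue in issues for pages in issue[:5]]
    return median(first_five)
```

For a single issue with articles of 10, 12, 30, 8, 14, 2 and 3 pages, the last two (notes) are ignored and the measure is 12.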
A second trend in economics publishing that has been noted elsewhere is an increase in
coauthorship (Hudson, 1996). In the 1970's only 30 percent of the articles in the top five
journals were coauthored. In the 1990's about 60 percent were coauthored. In the longer
run the trend is even more striking: as recently as 1959 only 3 percent of the articles in
the Journal of Political Economy were coauthored. This trend could be indicative of an
increase in complexity if one reason that economists work jointly on a project is that one
person alone would find it difficult to carry out the range of specialized tasks involved.
While each of these changes could be indicative of an increase in complexity, other inter-
pretations are possible. One potential problem with the page length measure is that it may
also reflect the degree to which journals require authors to provide a detailed introduction,
give intuition for equations, survey related literatures, and do other things that are intended
to make papers easier to read rather than harder. Laband and Wells (1998) note that prior
to 1970 there had been a gradual but substantial trend toward shorter papers dating all the
[53] To be precise, the figure and the discussion in this paragraph concern the median of the page lengths
of articles which were among the first five in their issue. This measure was chosen to reflect the length of
a "typical" article in a way that would be unaffected by changes over time in the number of notes that are
published and in changing definitions of what constitutes a paper versus a note. I do not attempt to correct
for slight format changes instituted at the JPE in 1971 and at REStud in 1982 because my attempts to
count typical numbers of characters per page indicated that there were no substantial changes.
[Figure 4: Changes in page lengths over time. "Median Article Lengths: 1969-1999." The
figure graphs the median length in pages of articles that were among the first five articles
in their journal issue, for Econometrica, the Review of Economic Studies, the Review of
Economics and Statistics, the American Economic Review, the Quarterly Journal of
Economics, and the Journal of Political Economy.]
way back to the turn of the century. Hence, a second problem is that if one wants to regard
complexity as continually increasing, then one must argue that page lengths switched from
being negatively related to complexity to positively related in 1970. A troubling fact about
coauthorship as a measure of complexity is that in recent years coauthorship has been less
common at the Review of Economic Studies and Econometrica than at the AER, QJE and
JPE.[54]
One additional (albeit somewhat circular) piece of evidence on complexity is the first
review times we saw earlier. Recall from Figure 3 that there has been only a small increase
in journals' first response times over the last fifteen or twenty years. If papers were now
more difficult to read, one might expect these times to have increased.[55] The widening
gap between first response times for all papers and for eventually accepted papers at the
JPE may also be informative. It seems more likely that this reflects referees and editors
spending longer developing ideas for more substantial revisions than that there is a widening
gap between the complexity of accepted and rejected papers.
5.2.2 Measures of specialization
As noted above, the relevant notion of complexity for the stories told above is complexity
relative to the skills and knowledge of those in the profession. In this subsection, I look
for evidence of complexity in this sense, by examining the extent to which economists have
become more or less specialized over time. My motivation for doing so is the thought that
if there has been an increase in complexity that has made it more difficult for authors
to master their own work, for colleagues to provide useful feedback, and/or for editors to
digest papers, then economists should have responded by becoming increasingly specialized
in particular lines of research. I find little evidence of increasing specialization.
To measure the degree to which economists are specialized I use the index that Ellison
and Glaeser (1997) proposed to measure geographic concentration.[56] Suppose that a set of
[54] The most obvious alternative to increasing complexity as the cause of increasing coauthorship is changes
in the returns to writing coauthored papers. Sauer's (1988) analysis of the salaries of economics professors at
seven economics departments in 1982 did not support the common perception that the benefit an economist
receives from writing an n-authored paper is greater than 1/n-th of the benefit from writing a sole-authored
paper.
[55] As mentioned above, it is not clear how closely review times and difficulty of reading should be linked
given that the time necessary to complete a review is a tiny fraction of the time referees hold papers. Another
possibility is that referees might respond to the increased complexity of submissions by reading papers less
carefully. This could also account for a trend toward more rounds of revisions, but I know of no evidence
to suggest that it is true.
[56] The analogy with Ellison and Glaeser (1997) is to equate economists with industries, fields with geo-
graphic areas, and papers with manufacturing plants. See Stern and Trajtenberg (1998) for an application
of the index to doctors' prescribing patterns similar to that given here.
economics papers can be classified as belonging to one of F fields indexed by f = 1, 2, ..., F.
Write N_i for the number of papers written by economist i, s_if for the share of economist
i's papers that are in field f, and x_f for the fraction of all publications that are in field f.
The Ellison-Glaeser index of the degree to which economist i is specialized is

    γ_i = −1/(N_i − 1) + (N_i/(N_i − 1)) Σ_f (s_if − x_f)² / (1 − Σ_f x_f²)

Under particular assumptions discussed in Ellison and Glaeser (1997) the expected value of
this index is unaffected by the number of papers by an author that we are able to observe,
and by the number and size of the fields used in the breakdown. The scale of the index
is such that a value of 0.2 would indicate that the frequency with which we see pairs of
papers by the same author being in the same field matches what would be expected if 20
percent of authors wrote all of their papers in a single field and 80 percent of authors wrote
in fields that were completely uncorrelated from paper to paper (drawing each topic from
the aggregate distribution of fields).
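The index can be computed directly from an author's per-field paper counts and the aggregate field shares. A minimal sketch, in which the function name and input layout are my own; the formula is the one given in the text:

```python
import numpy as np

def eg_index(author_field_counts, agg_shares):
    """Ellison-Glaeser specialization index for a single author.

    author_field_counts: the author's paper counts over the F fields
    (at least two papers in total, so N_i >= 2).
    agg_shares: x_f, the fraction of all publications in each field.
    The scaling makes the expected value of the index 1 for an author
    who writes every paper in a single field and 0 for an author whose
    fields are independent draws from the aggregate distribution.
    """
    counts = np.asarray(author_field_counts, dtype=float)
    x = np.asarray(agg_shares, dtype=float)
    N = counts.sum()               # N_i: the author's observed papers
    s = counts / N                 # s_if: the author's field shares
    G = np.sum((s - x) ** 2)       # raw concentration of the author
    H = np.sum(x ** 2)             # Herfindahl of the aggregate shares
    return -1.0 / (N - 1) + (N / (N - 1)) * G / (1 - H)
```

With two equal-sized fields, an author with four papers all in one field gets an index of 1, while an author who splits papers evenly across the two fields gets a negative sample value.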
I first apply the measure to look at the specialization of authors across the main fields
of economics. Based largely on JEL codes, I assigned the articles in the top five journals
since 1970 to one of seventeen fields.[57] In order of frequency the fields are: microeconomic
theory, macroeconomics, econometrics, industrial organization, labor, international, public
finance, finance, development, other, urban, history, experimental, productivity, political
economy, environmental, and law and economics.
Table 8 reports the average value of the Ellison-Glaeser index (computed separately for
the 1970's, 1980's and 1990's) among economists having at least two publications in the
top five journals in the decade in question.[58] The data in the first three columns indicate
that there has been only a very slight increase in specialization. The absolute level of
specialization also seems fairly low relative to the common perception.
The most obvious bias in the construction of this series is that with the advent of the
new JEL codes in 1991 I am able to do a better job of classifying papers into fields.[59]
Misclassifications will tend to make authors' publishing patterns look more random and
decrease measured specialization. Hence, the calculations above may be biased toward finding
[57] In a number of cases the JEL codes contain sets of papers that seem to belong to different fields. In
these cases I used rules based on title keywords and in some cases paper-by-paper judgements to assign
fields.
[58] I take an unweighted average across economists, so the measure reflects the specialization of the large
number of economists who have a few top publications and gives less weight to people like Joseph Stiglitz
and Martin Feldstein than their share of publications would dictate.
[59] A related bias is that it may be easier for me to divide papers into fields in the 1990's because my
understanding of what constitutes a field is based on my knowledge of economics in the 1990's.
Table 8: Specialization of authors across fields over time

Decade           1970's   1980's   1990's
Mean EG index     0.33     0.33     0.37
The table reports the mean value of the Ellison-Glaeser concentration index computed from
the decade-specific top five journal publication histories of authors with at least two papers
in the sample in the decade in question. Seventeen fields are used for the analysis. Data
for the 1990's includes data up to the end of 1997 or mid-1998 depending on the journal.
increased specialization. To assess the potential magnitude of this bias, I recomputed the
specialization index for the 1990's after reclassifying the 1990's papers using only the old
JEL codes (and the same rules I had used for the earlier papers). When I did this, the
measure of specialization in the 1990's declines to 0.31, a value which is below the level for
the 1970's and 1980's. I conclude that there is very little if any evidence of a trend toward
increasing specialization across fields.
The results above concern specialization at the level of broad fields. A second relevant
sense in which economists may be specialized is within particular subfields of the fields in
which they work. To construct indices of within-field specialization, I viewed each field
of economics (in each decade) as a separate universe, and treated pre-1991 JEL codes
as subfields into which the field could be divided. I then computed Ellison-Glaeser indices
exactly as above on the set of economists having two or more publications in top five journals
in the field (ignoring their publications in other fields). In the minor fields this would
have left me with a very small (and sometimes nonexistent) sample of economists. Hence,
I restricted the analysis to the seven fields for which the relevant sample of economists
exceeded ten in each decade and for which the subfields defined by JEL codes gave a
reasonably fine field breakdown: microeconomic theory, macroeconomics, labor, industrial
organization, international, public finance and finance. ^°
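The construction above can be sketched in a few lines. This is a minimal illustration of an Ellison-Glaeser-style index for a single author, not the paper's actual code; the function name and the treatment of each paper as an equal-sized "plant" (so the plant Herfindahl is 1/n) are my assumptions:

```python
def eg_index(author_fields, agg_shares):
    """Ellison-Glaeser-style concentration index for one author.

    author_fields: one field label per paper (two or more papers).
    agg_shares: field -> aggregate share of all papers in the universe.
    Each paper is treated as an equal-sized 'plant', so H = 1/n.
    """
    n = len(author_fields)
    # the author's share of papers falling in each field
    s = {g: author_fields.count(g) / n for g in set(author_fields)}
    # raw concentration relative to the aggregate field distribution
    G = sum((s.get(g, 0.0) - x) ** 2 for g, x in agg_shares.items())
    H = 1.0 / n
    denom = 1.0 - sum(x * x for x in agg_shares.values())
    return (G / denom - H) / (1.0 - H)

# An author publishing only in one of two equally sized fields is
# maximally concentrated; an author split evenly across them is not.
agg = {'micro': 0.5, 'macro': 0.5}
print(eg_index(['micro', 'micro'], agg))  # 1.0
```

The decade figures in Table 8 would then be unweighted means of this index over all authors with at least two papers in the decade.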
The results presented in Table 9 reveal no single typical pattern. In three fields, microeconomic theory, industrial organization and labor, there is a trend toward decreasing
within-field specialization. In two others, macroeconomics and public finance, there is a
substantial drop from the 1970's to the 1980's followed by a slight increase from the 1980's to
the 1990's. International economics and finance, in contrast, exhibit increasing within-field
The number of economists meeting the criterion ranged from 19 for finance in the 1970's to 264 for
theory in the 1980's. The additional restriction weis that I only included fields for which the herfindahl
index of the component JEL codes was below 0.5.
35
specialization.
Table 9: Within-field specialization of authors over time

                                Index of within-field specialization
    Field                        1970's    1980's    1990's
    Microeconomic theory          0.38      0.32      0.23
    Macroeconomics                0.27      0.17      0.18
    Industrial organization       0.35      0.30      0.11
    Labor                         0.27      0.22      0.09
    International                 0.25      0.35      0.36
    Public Finance                0.50      0.28      0.30
    Finance                       0.29      0.20      0.41

The table reports the mean value of the Ellison-Glaeser concentration index computed
by treating publications in a field in the top five journals in a decade as the universe
and treating the set of distinct pre-1991 JEL codes of papers in the field as the set of
subfields. Values are the unweighted means of the index across authors with at least two
such publications. Data for the 1990's includes data up to the end of 1997 or mid-1998
depending on the journal.
Again, one potential bias in the time series is that I do a better job of classifying
papers after 1990. Misclassifications of papers into fields will tend to make within-field
specialization look higher. For example, if a JEL code containing a few macro papers
is put into micro theory, a few macroeconomists will be added to the micro theory
population. Their publications in the micro theory universe will tend to be concentrated in
the misclassified JEL code. By improving the classification in the 1990's I may be biasing
the results toward a finding of reduced within-field specialization. To assess this bias I
again repeated the calculations after reclassifying the 1990's data using only the pre-1991
JEL codes. This change increased the measured within-field specialization for the 1990's
for all fields except public finance. In no case, however, did the ranking of 1970's versus
1990's specialization change. The largest change is in theory, where the 1990's value of the
specialization index, 0.36, becomes very close to its 1970's value.
A second potential bias is that the relevance of the subfields defined by JEL codes
changes over time. In some cases, such as the creation of new JEL codes for auction
theory and contract theory in 1982, the JEL codes themselves change in a way that makes
them better descriptions of subfields. This would tend to make measured specialization
increase. In other cases, fields evolve in a way that causes the JEL codes to lose their
ability to describe meaningful subfields. In empirical industrial organization, for example,
the codes mostly describe the industry being studied, rather than the topic that is being
explored using the industry as an example or whether the author takes a reduced form or
structural approach. ^^ To get some idea of how this may affect the results, I constructed
my own breakdown of microeconomic theory into ten subfields. In order of frequency they
are: unclassified, price theory, general equilibrium, welfare economics, game theory, social
choice, contract theory, auctions, decision theory and learning. The classification is largely
made by combining JEL codes, but again I also in some cases use title keywords or case-
by-case decisions. Using these subfields, I find the within-theory specialization index for
the three decades to be 0.40, 0.28 and 0.45. (Here, the fact that my subfield classifications
improve over time may bias me toward finding increased specialization.)
Overall, I interpret the results of this section as indicating that there is little evidence
of a trend toward economists becoming more specialized.
5.2.3 Why might economists perceive that specialization has increased?
How can we reconcile the results of the previous section with a common perception that
economics is becoming increasingly specialized? One set of potential explanations is based
on the fact that economists and their positions within the profession change over time,
and judgements about changes in complexity are biased by changes in one's perspective.
One potential effect is that economists may invest heavily in knowledge capital at the
start of their career and then allow their knowledge to decay over time. They would then
correctly perceive themselves to understand less of the field over time, regardless of whether
the understanding of the profession as a whole has changed. Another source of bias may
be that what economists are asked to do changes over time. Initially, economists are only
asked to referee papers closely related to their work. Later, they are put in roles where they
read papers further from their specialty, e.g. reviewing colleagues for tenure and serving
on hiring committees. If they don't fully account for changes in the set of papers they
read, economists may perceive their ability to read papers to have diminished. Another
factor could be changing expectations that make economists more uncomfortable with a
lack of knowledge as they advance to higher positions. If economists form beliefs about
how complexity has changed by thinking of their recent observations of old papers, another
bias is plausible. The old papers that economists encounter are a nonrandom sample of the
papers written at the time. They tend to be papers that have spawned substantial future
work. Such papers will be easier to understand today than when they were written.

Another example is that a primary breakdown of microeconomic theory in the old codes is into consumer
theory and producer theory.
5.3 Links between complexity and review times
In this section I'll put aside the question of whether economics papers really are becoming
more complex and discuss a few pieces of evidence on the question of whether an increase
in complexity would slow down the review process if it were occurring.
5.3.1 Simple measures of complexity
I noted earlier that papers have grown longer over time and that coauthorship is more fre-
quent. While it is not clear whether these changes are due to an increase in the complexity of
economics articles, it is instructive to examine their relationship with submit-accept times.
Two variables in the regression of submit-accept times on paper and author characteristics
in Table 7 are relevant.
First, Pages is the length of an article in pages.® In all three decades, this variable has
a positive and highly significant effect.®^ The estimates are that longer papers take longer
in the review process by about five days per page. The lengthening of papers over the last
thirty years might therefore account for two months of the overall increase in submit-accept
times.
times. Alternate explanations for the estimate can also be given. For example, papers that
go through more rounds of revisions may grow in length as authors add material and
comments in response to referees' comments, or longer published papers may tend to be
papers that were much too long when first submitted and needed extensive editorial input.
It is also not clear whether increases in page lengths should be regarded as a root cause or
whether they are themselves a reflection of changes in social norms for how papers should
be written.
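The back-of-the-envelope arithmetic behind the two-month figure can be made explicit; the dozen extra pages here is my illustrative assumption, roughly consistent with the lengthening of articles since 1970 discussed in Section 6:

```python
days_per_page = 5   # estimated effect of an extra page on review time (Table 7)
extra_pages = 12    # assumed lengthening of a typical article since 1970 (illustrative)
extra_days = days_per_page * extra_pages
print(extra_days)   # 60 days, i.e. roughly two months
```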
Second, NumAuthors is the number of authors of the paper. In the 1970's, coauthored
papers appear to have been accepted more quickly. In later decades coauthored papers
have taken slightly longer in the review process, but the relationship is not significant. I
would conclude that if the rise in coauthorship is due to the increased difficulty of writing
an economics paper, then in the cross-section any tendency of coauthored papers to be
more complex and take longer to review must be largely offset by advantages to the authors
of having multiple authors working on the paper.
^^Recall that the regression includes only full-length articles and not shorter papers, comments and replies.
^^This contrasts with Laband et al (1990) who report that in a quadratic specification the relationship
between review times and page lengths (for papers in REStat between 1970 and 1980) is nearly flat around
the mean page length. Hamermesh (1994) does report that referees take longer to referee longer papers in
his data, but the size of that effect (about 0.7 days per page) is too small to fully account for what I observe.
5.3.2 Specialization and advice from colleagues
In this section I focus on the second potential link between complexity and review times
mentioned above — that in an increasingly specialized profession authors will be less able
to get help from their colleagues. The data provides little support for this idea.
The argument above is based on an assumption that advice from colleagues is useful
and gives authors a headstart on the journal review process. If this were true, economists
from top departments should get their papers through the review process more quickly than
economists at departments which produce less research output. Economists at top schools
are more likely to have colleagues with sufficient expertise in their area to provide useful
feedback than are economists in smaller departments or in departments where fewer of the
faculty are actively engaged in research.
Recall that I had earlier included the variable SchoolTop5Pubs in my basic regression of
submit-accept times on author and editor characteristics in the hope that it might reveal a
prestige advantage enjoyed by authors at top schools. The variable would also be expected
to have a negative sign if these authors enjoyed real advantages in the form of helpful advice
from colleagues. The fact that the t-statistic on the variable (in the third column of Table
7) is only 0.9 indicates that I do not find significant evidence that interacting with more
productive colleagues allows one to polish papers prior to submission and thereby reduce
submit-accept times. ^^
5.3.3 Specialization and editor expertise
In this subsection, I examine the argument that submit-accept times may lengthen as the
profession becomes more specialized because an editor with less expertise on a topic will
end up asking for more rounds of revisions and sending more revisions back to the referees.
This argument is certainly plausible, but the opposite effect would be plausible as well.
Indeed, one editor remarked to me a few years ago that he felt that the review process
for the occasional international trade paper that he handled was less drawn out than for
papers in his specialty. The reason was that for papers in his specialty he would always
identify a number of ways in which the paper could be improved, while with trade papers
if the referees didn't have many comments he would just have to make a yes/no decision
and focus on the exposition.

While the regression provides no evidence of a relationship between affiliation and submit-accept times
as hypothesized above, it is interesting to note that authors from top schools do get their papers accepted
more quickly. In a univariate regression of submit-accept times on SchoolTop5Pubs, the coefficient estimate
is -1.09 with a t-statistic of 3.66. Whether one regards this as indicating that the structure of the profession
puts economists at lower ranked schools at a disadvantage will depend on one's view of the QJE. The
measured effect drops by half when journal dummies and journal-specific time trends are included, and it
becomes insignificant when the other control variables (most notably the author's publication record) are
included. The primary reason for this is that the QJE has the fastest submit-accept times and the fraction
of papers coming from top schools is substantially higher there than at the other journals.
My idea for examining the editor-expertise link between specialization and submit-
accept times is straightforward. I construct a measurement, EditorDistance, of how far
the paper is from the editor's area of expertise and include this in a submit-accept time
regression like those in Table 7.
The approach I take to quantifying how far each paper is from its editor's area of
expertise is to assign each paper i to a field f(i), determine for each editor e the fraction
of his papers, s_eg, falling into each field g, define a field-to-field distance measure, d(f,g),
and then define the distance between the paper and the editor's area of expertise by

    EditorDistance_i = sum_g s_{e(i)g} d(f(i), g).
When the editor's identity is not known, I evaluate this measure for each of the editors who
worked at the journal when the paper was submitted and then impute that the paper was
assigned to the editor for whom the distance would be minimized. ^'^
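This computation can be sketched in a few lines; the functions below, and the toy fields, shares, and distances in the example, are hypothetical illustrations rather than the paper's actual code:

```python
def editor_distance(paper_field, editor_shares, d):
    # EditorDistance_i = sum over fields g of s_{e,g} * d(f(i), g)
    return sum(share * d[(paper_field, g)] for g, share in editor_shares.items())

def impute_editor(paper_field, editors, d):
    # With the handling editor unknown, assume the paper went to the
    # editor for whom the distance measure is minimized.
    return min(editors, key=lambda e: editor_distance(paper_field, editors[e], d))

# toy example: editor A (mostly labor) is still closer to a trade
# paper than editor B (pure theory)
d = {('trade', 'trade'): 0.0, ('trade', 'labor'): 0.6, ('trade', 'theory'): 0.9}
editors = {'A': {'labor': 0.8, 'trade': 0.2}, 'B': {'theory': 1.0}}
print(impute_editor('trade', editors, d))  # A
```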
The construction of the field-to-field distance measure is based on the idea that two
fields can be regarded as close together if economists who write papers in one are also likely
to write in the other. Details on how this was done are reported in Appendix A. The
whole exercise may seem a bit far-fetched, so I have also included a couple of tables in the
appendix designed to give an idea of how the measure is working: one lists the three closest
fields to each field; the other presents some examples of imputed editor assignments and
distances. I'd urge anyone interested to take a look.
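One plausible way to implement the closeness idea, purely for illustration (the paper's actual construction is in Appendix A and is not reproduced here), is to treat each field as a vector of author publication counts and let the distance be one minus the cosine similarity of the two vectors:

```python
import math

def field_distance(pub_counts, f, g):
    """Distance between fields f and g: 1 minus the cosine similarity
    of their author-publication-count vectors, so fields written in by
    the same people are close (distance near 0) and fields with
    disjoint author populations are far apart (distance 1)."""
    authors = sorted({a for a, _ in pub_counts})
    vf = [pub_counts.get((a, f), 0) for a in authors]
    vg = [pub_counts.get((a, g), 0) for a in authors]
    dot = sum(x * y for x, y in zip(vf, vg))
    norm = math.sqrt(sum(x * x for x in vf)) * math.sqrt(sum(y * y for y in vg))
    return 1.0 - (dot / norm if norm else 0.0)

# author 'a' writes in both f and g; author 'b' writes only in h
pubs = {('a', 'f'): 1, ('a', 'g'): 1, ('b', 'h'): 2}
print(field_distance(pubs, 'f', 'g'))  # 0.0
print(field_distance(pubs, 'f', 'h'))  # 1.0
```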
Table 10 reports the estimated coefficient on EditorDistance in regressions of submit-
accept times in the 1990's on this variable and the variables in the basic regression of Table
7. To save space, I do not report the coefficient estimates for the other variables, which are
similar to those in Table 7.^^ The specification in the first column departs slightly from the
earlier regressions in that it employs editor fixed effects rather than journal fixed effects
and journal-specific trends. The coefficient estimate of -66.8 indicates that papers that are
further from the editor's area of expertise had slightly shorter submit-accept times (the
standard deviation of EditorDistance is 0.25), but the effect is not statistically significant.
^^The data include the editor's identity only for papers at the JPE and papers at the QJE in later years.
All other editor identities are imputed.
^^The most notable change is that the coefficient on log(1 + Cites) increases to 65.9 and its t-statistic
increases to 6.65 while the coefficient on Order becomes smaller and insignificant. The interpretation of
these variables will be discussed in Section 7.2.2.
Table 10: Effect of editor expertise on submit-accept times

    Dependent variable: submit-accept time

    Independent variables         (1)       (2)       (3)
    EditorDistance              -66.8    -146.9     -22.4
                                 (1.3)     (3.4)     (0.5)
    Editor fixed effects          Yes       Yes        No
    Field fixed effects           Yes        No       Yes
    Journal fixed effects          No        No       Yes
      and trends
    Other variables from          Yes       Yes       Yes
      Table 7
    R-squared                    0.30      0.27      0.19

The table reports the results of regressions of submit-accept times on the distance of a
paper from the editor's area of expertise. The sample consists of papers published in the
top five general interest journals in the 1990's for which the data is available. The dependent
variable is the time between a paper's submission to the journal and its acceptance (or final
resubmission in the case of Econometrica) in days. The primary independent variable,
EditorDistance, is a measure of how far the paper is from the editor's area of expertise
as described in the text and Appendix A. T-statistics are given in parentheses below the
estimates. The regression in column (1) has unreported editor and field fixed effects. The
regression in column (2) has editor fixed effects. The regression in column (3) has field and
journal fixed effects and field specific linear time trends. Each regression also includes the
same independent variables as in the regressions in Table 7.
The regression in the first column includes both editor and field fixed effects (for seventeen
fields). In each case, one might argue that including the fixed effects ignores potentially
interesting sources of variation. First, some fields have been much better represented than
others on the editorial boards of top journals. For example, the AER, QJE, JPE and
Econometrica have all had labor economists on their boards for a substantial part of the
last decade, while I don't think that any editor (of forty-two) would call himself an
international economist.^^ One could imagine that this might lead to informative differences
in the mean submit-accept times for labor and international papers that are ignored by
the field fixed-effects estimates. Column 2 of Table 10 reports estimates from a regression
which is like that of column 1, but omitting the field fixed effects. The coefficient estimate
for EditorDistance is now -146.9, and it is highly significant. Apparently, fields which are
well represented on editorial boards have slower submit-accept times.^^
Column 3 of Table 10 reports on a regression that omits the editor fixed effects (and
includes journal fixed effects and journal specific linear time trends). The motivation for
this specification is that if editor expertise speeds publication then the editors of a journal
who handle fewer papers outside their area should on average be faster. The fact that
the coefficient on EditorDistance is somewhat less negative in column 3 than in column 1
provides only very weak support for this hypothesis.''^
Overall, I conclude that I have found little evidence of any mechanism by which increased
specialization would lead to a slowdown of the review process.
6 Growth of the profession
In this section I discuss the idea that the slowdown may be a consequence of the growth of
the economics profession. What I do and do not find about how the profession has changed
may be surprising. First, what I do not find is evidence that the profession has grown much
over the last thirty years or that many more papers are being submitted to top journals.
It is also true that none of the forty-two are women. Nancy Stokey and Valerie Ramey did not start in
time to have any papers published before the end of my data.
^^A problem with trying to interpret this as indicating that economists in a field are made better or worse
off by being represented on editorial boards is that I can say nothing about the effect of editor expertise
on the likelihood of a paper of a particular quality being accepted. If such a relationship exists, the results
on submit-accept times may also reflect a selection bias to the extent that the mean quality of papers in
different fields differs.
One potential problem with trying to use the cross-editor variation in expertise is an endogeneity issue
— editors who handle a lot of papers outside their field may have gotten their jobs over editors who would
have been a better match for the submissions fieldwise because it was thought that they would do a good
job.
Hence, it does not appear that growth could have slowed the review process significantly
by increasing editorial workloads. Second, what I do find is that over the last two decades the
top journals have grown substantially in their impact relative to other journals. Looking
at patterns across journals I estimate that the increased competition that this creates may
account for three months of the slowdown at the top journals.
6.1 The potential explanation
The starting point for the set of explanations I will discuss here is the assumption that
there has been a great deal of growth in the economics profession over time. There are
at least three main channels through which such growth might be expected to lead to a
slowdown of the publication process.
First, an increase in the number of economists would be expected to lead to an increase
in submissions, and thereby to increases in editors' workloads. Editors who are under time
pressure may be more likely to return papers for an initial revision without having thought
through what changes would make a paper publishable, and thereby increase the number
of rounds of revisions that are eventually necessary. They may also rely more on referees
to review revisions rather than trying to evaluate the changes themselves, which can lead
both to more rounds and longer times per round.
Second, in the "old days" editors may have seen many papers before they were submitted
to journals. With the growth of the profession, editors may have seen a much smaller
fraction of papers prior to submission. Unfamiliar papers may have longer review times.
Third, growth would lead to more intense competition to publish in the top journals.
This would be expected to lead to an increase in overall quality standards. To achieve the
higher standards, authors may need to spend more time working with referees and editors
to improve exposition, clarify proofs, address alternate explanations, etc.^*^
6.2 Has the profession grown?
While my first inclination was to not even ask this question assuming that the answer was
obviously yes, evidence of substantial growth is hard to find.
First, recall from Table 5 that there has been little change since the 1970's in the
Herfindahl index of the author-level concentration of publication, which suggests that the
^°Ellison (2000) provides an example where the opposite change occurs in an equilibrium model of time
allocation. As the journal becomes more selective, authors gamble on increasingly bold ideas, and the polish
of the average accepted paper declines.
population of economists trying to publish in top journals is not providing more severe
competition for the top economists.
Second, as Siegfried (1998) has noted, counts of economists obtained from membership
rolls of professional societies or department faculty lists also indicate that the profession
has grown relatively slowly since 1970. Table 11 reports time series for the number of
members of the American Economic Association and Econometric Society.^^ Increases in
AEA membership since 1970 seem modest — the total increase over the last thirty years
is about 10 percent. Siegfried (1998) also counted the number of economics department
faculty members at 24 major U.S. universities at various points in time. In aggregate the
economics departments at these universities were slightly smaller in 1995 than in 1973. The
growth in the profession due to increases in the number of economists at business schools
and other institutions is presumably a large part of the difference between the overall AEA
membership increase and the slight drop in membership at the 24 economics departments
he examined.
Econometric Society membership has increased more substantially since 1980.'^ The
growth in individual memberships may overstate the growth in the number of economists
interested in the AER and Econometrica for a couple of reasons. At both journals some of the
increase may be attributable to institutions switching subscriptions to individuals' names
(the gap between individual and institutional prices has widened and the decrease in the
institutional subscriber base is comparable to the increase in the individual total). The
price of Econometrica has also declined over time in real terms. ^'^
The number of U.S. members of the Econometric Society has only increased by about 10
percent between 1976 and 1998, so it may be tempting to try to reconcile the two series by
hypothesizing that there has been relatively slow growth in the U.S. economist population,
but substantial overall growth due to a more rapid growth in the number of economists
outside the U.S. doing work that would be appropriate for top journals. This, however, is
at odds with the publication data. The increase in authors with foreign names comes from
'^^The table records the total membership of the AEA and the number of regular members of the Econo-
metric Society at midyear.
^^The earlier membership information is problematic. At the time the Econometric Society reported
1970 regular membership as 3150, which is above even the current total. However, the society at the time
apparently had accounting problems that resulted in a large number of people continuing to remain members
and receive the journal despite not having paid dues. The figure reported in the table for 1970 is an estimate
obtained by adjusting the reported 1970 figure by the percentage drop in membership which occurred later
when those who had not paid dues were dropped from the membership lists.
^■^I do not know of good estimates of price elasticities for journals, but the fact that subscriptions were so
high in 1970 suggests that it could be large enough to allow most of the 13 percent increase in membership
since 1990 to be attributed to the 20 percent cut in the real price over the period.
U.S. schools hiring foreign economists, not from the rise of foreign schools. In 1970, 27.5
percent of the articles in the top five journals were by authors working outside the U.S.
In 1999 the figure was only 23.9 percent.
Table 11: Growth in the number of economists in professional societies

    Year                      1950     1960     1970     1980     1990     1998
    AEA total membership      6936    10847    18908    19401    21578    20874
    ES regular membership        -     1399     1955     1978     2571     2900

The first row reports the total membership of the American Economic Association in selected
years. The second row reports the number of regular members of the Econometric Society
at midyear.
Finally, a third place where it seemed natural to look for evidence of the growth of the
profession is in the number of submissions to top journals. Figure 5 graphs the annual
number of new submissions to the AER, Econometrica, JPE and QJE. Generally the data
indicate that there has been a small and nonsteady increase in submissions. AER submissions
dropped between 1970 and 1980, grew substantially between 1980 and 1985, and
have been fairly flat since (which is when the observed slowdown occurs). JPE submissions
peaked in the early 1970's and have been remarkably constant since 1973.'^ Econometrica
submissions grew substantially between the early 1970's and mid 1980's, and have generally
declined since. QJE submissions increased at some point between the mid 1970's and
early 1990's and have continued to increase in recent years. Overall, the submissions data
indicate fairly clearly that there has not been a dramatic increase in submissions.
In both Table 11 and Figure 5 I have included some data from before 1970. The
clarity of the evidence of the growth of the profession between 1950 and 1970 provides
a striking contrast. American Economic Association membership grew by more than 50
percent in both the 1950's and the 1960's (and also more than doubled in the 1940's). The
Econometric Society also appears to have grown substantially in the 1960's. Submissions to
the AER grew from 197 in 1950 to 276 in 1960 and 879 in 1970. I take this data to suggest
^""Each author of a jointly authored paper was given fractional credit in computing this figure, with credit
for an author's contributions also being divided if he or she lists multiple affiliations (other than the NBER
and similar organizations).
'^The percentage of articles by non-U.S.-based authors dropped from 60% to 41% at REStud and from
34% to 28% at Econometrica. There was little change at the AER, JPE and QJE.
'''The source of the early 1970's data is not clearly labeled and there is some chance that the 1970-1972
peak is due to resubmissions being grouped in with new submissions in those years.
Figure 5: Submissions to top journals (Annual Submissions: 1960-1999)

The figure graphs the number of new papers submitted to the AER, Econometrica,
JPE and QJE in various years since 1960.
that for all their problems, the simple measures above ought to have some power to pick
up a large growth in the profession. They also suggest that the common impression that
the profession is much larger now than in 1970 may reflect a mistaken recollection of when
the earlier growth occurred.
6.3 Growth and submit-accept times
In this section I will discuss in turn each of the three arguments mentioned for why the
growth of the profession might lead to a slowdown of the publication process.
6.3.1 Editor workloads
First, I noted that editors who are busier may be less likely to give clear instructions about
what they'd like to see in a revision and may more often ask referees to review revisions. I
see this potential explanation as hard to support, because it is hard to find the exogenous
increase in workloads on which it is based.
Editors' workloads have two main components: spending a small amount of time on
a large number of submissions that are rejected and spending a large amount of time on
the small number of papers that are accepted. To obtain a measure of the first component
one would want to adjust submission figures to reflect changes in the difficulty of reading
papers and in the number of editors at a journal. Overall submissions have not increased
much. Articles are 50 to 100 percent longer now than in 1970. The fraction of submissions
that are notes or comments must also have declined. At the same time, however, there
have been substantial increases in the number of editors who divide the workload at most
journals: the AER went from one editor to four in 1984; Econometrica went from 3 to 4
in 1975 and from 4 to 5 in 1998; the JPE went from 2 to 4 in the mid 1970's and from 4
to 5 in 1999; REStud went from 2 to 3 in 1994. Hence, I wouldn't think that the rejection
part of editors' workloads should have increased much. The other component should have
been reduced because journals are not publishing more papers (see Table 12) and there are
more editors dividing the work. I could believe that this part of an editor's job has not
become less time-consuming because editors are trying to guide more extensive revisions,
but would regard this as switching from increased workloads to changes in norms as the
basis for the explanation.
6.3.2 Familiarity with submissions
To examine the idea that in the old days editors were able to review papers more quickly
because they were more likely to have seen papers before they were submitted, I included in
the 1990's submit-accept times regression of Table 7 a dummy variable JournalHQ indicating
whether any of a paper's authors were affiliated with the journal's home institution.
The regression yields no evidence that editors are able to handle papers they have seen
before more quickly.'^ There could be confounding effects in either direction, e.g. editors
may feel pressure to subject colleagues' papers to a full review process, they may ask colleagues
to make fewer changes than they would ask of others, they may give colleagues
extra chances to revise papers that would otherwise be rejected, etc., but I feel the lack
of an effect is still fairly good evidence that the review process is not greatly affected by
whether editors have seen papers in advance. ^^
6.3.3 Competition
The final potential explanation mentioned above is that the review process may have length-
ened because journals have raised quality standards in response to the increased competition
among authors for space in the journals. This explanation is naturally addressed by ask-
ing whether competition has increased and whether increases in competition would lead to
longer review times.
As mentioned above, the evidence from society membership and journal submissions
suggests that the relevant population of economists has increased only moderately. ^°
A second relevant factor is how the number of articles published has changed. Articles
are now longer than they used to be. While some journals have responded to this by
increasing their annual page totals, others have tended to keep their page totals fixed and
reduced the number of articles they print. Table 12 illustrates this by reporting the average
In constructing this variable I regarded the QJE as having both Harvard and MIT as home institutions
and the JPE as having Chicago as its home. Other journals were treated as having no home institution,
because it is my impression that editors at the other journals generally do not handle papers written by
their colleagues.
Laband et al (1990) report that papers by Harvard authors had shorter submit-accept times at REStat
in 1976-1980.
Laband and Piette (1994a) report that papers by authors who share a school connection with a journal
are more widely cited and interpret this as evidence that editors are not discriminating in favor of their
friends. Their school connection variable is much looser than those I've considered; it would include, for
example, any instance in which one author of a paper went to the same graduate school as any associate
editor of the journal.
An increased emphasis on journal publications rather than books might, however, mean that the number
of economists trying to write articles has grown more.
number of full length articles published in each journal in each decade. At Econometrica
and the JPE there has been a substantial decline in the number of articles published.
Comments and notes, which once constituted about one quarter of all publications, have
also almost disappeared at the JPE, QJE and REStud. As a result, one would expect that
a higher proportion of the submissions to these journals are also competing for the available
slots for articles.
Table 12: Number of full length articles per year in top journals

                                        Number of articles per year
    Journal                           1970-1979   1980-1989   1990-1997
    American Economic Review              53          50          55
    Econometrica                          74          69          46
    Journal of Political Economy          71          58          48
    Quarterly Journal of Economics        30          41          43
    Review of Economic Studies            42          47          39

The table lists the average number of articles in various journals in different years. The
counts reflect an attempt to distinguish articles from notes, comments and other briefer
contributions.
A third relevant factor is how the incentives for authors to publish in the top journals
have changed. Since 1970 a tremendous number of new economics journals have appeared.
This includes the top field journals in most fields. One might imagine that the increase in
competition on the journal side might have forced top journals to lower their acceptance
threshold and may also have reduced the gap between authors' payoffs from publishing in
the top journals and their payoffs from publishing in the next best journals. Surprisingly,
the opposite appears to be true.
To explore changes in the relative status of journals, I used data from ISI's Journal
Citation Reports and from Laband and Piette (1994b) to compute the frequency with
which recent articles in each of the journals listed in Table 1 were cited in 1970, 1980, 1990
and 1998. Specifically, for 1980, 1990 and 1998 I calculated the impact of a typical article
in journal i in year t by

    CiteRatio_it = [ sum from y = t-9 to t of c(i,y,t) ] / n(i, t-9, t)

where c(i,y,t) is the number of times papers that appeared in journal i in year y were
cited in year t and n(i, t-9, t) is an estimate of the total number of papers published in
journal i between year t-9 and year t.

There is no natural consistent way to define a full length article. In earlier decades it was common for
notes as short as three pages and comments to be interspersed with longer articles rather than being
grouped together at the end of an issue. Also, some of the papers that are now published in separate
sections of shorter papers are indistinguishable from articles. For the calculation reported in the table
most papers in Econometrica and REStud were classified by hand according to how they were labeled
by the journals and most papers in the other journals were classified using rules of thumb based on
minimum page lengths (which I varied slightly over time to reflect that comments and other short
material have also increased in length).

The data that Laband and Piette (1994a) used
to calculate the 1970 measures are similar, but include only citations to papers published
in 1965-1969 (rather than 1961-1970). Total citations have increased sharply over time
as the number of journals has increased and the typical article lists more references. To
compare the relative impact of top journals and other journals at different points in time, I
define a normalized variable, NCiteRatio_it, by dividing CiteRatio_it by the average of this
variable across the "top 5" general interest journals, i.e. AER, Econometrica, JPE, QJE
and REStud.
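As a rough sketch, the CiteRatio and NCiteRatio calculations described above can be expressed as follows. The journal names and citation counts here are invented for illustration only; the actual data come from ISI's Journal Citation Reports and Laband and Piette (1994b).

```python
# Sketch of the CiteRatio / NCiteRatio calculation described in the text.
# All journal names and citation counts below are invented for illustration.

def cite_ratio(cites_by_cohort, n_papers):
    # cites_by_cohort: citations received in year t to papers from each of the
    # ten publication years t-9, ..., t; n_papers: estimated number of papers
    # the journal published over that ten-year window.
    return sum(cites_by_cohort) / n_papers

# (per-cohort citation counts, estimated papers over the window)
data = {
    "TopJournal":   ([30] * 10, 500),
    "FieldJournal": ([12] * 10, 400),
}
ratios = {j: cite_ratio(c, n) for j, (c, n) in data.items()}

# Normalize by the mean ratio across the "top 5" journals (here just one).
top5_mean = ratios["TopJournal"]
ncite = {j: r / top5_mean for j, r in ratios.items()}
# ncite == {'TopJournal': 1.0, 'FieldJournal': 0.5}
```

By construction the top-journal group has a normalized ratio of one, so NCiteRatio reads directly as a journal's citation impact relative to the top five.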
Table 13 reports the mean value of NCiteRatio for four groups of journals in 1970, 1980,
1990 and 1998. The first row reports the mean for next-to-top general interest journals for
which I collected data: Economic Journal, International Economic Review, and Review of
Economics and Statistics. The second row gives the mean for eight field journals, each of
which is (at least arguably) the most highly regarded in a major field. The third row
gives the mean for the other economics journals for which I collected data. The table
clearly indicates that there has been a dramatic decline in the rate at which articles in the
second tier general interest journals and the top field journals are cited relative to the rate
at which articles at the top general interest journals are cited. While one could worry that
some of the effect is due to my classification reflecting my current understanding of the
relative status of journals, the contrast between 1980 and 1998 is striking in the raw data
(see Table 20 of Appendix C). In 1980 a number of the field journals, e.g. Bell, JET, JLE,
The citation data include all citations to shorter papers, comments, etc. The denominator is computed
by counting the number of papers that appeared in the journal in years t-2 and t-1 (again including
shorter papers, etc.) and multiplying the average by ten. When a journal was less than ten years old in year
t the numerator was inflated assuming that the journal would have received additional citations to papers
from the prepublication years (with the ratio of citations of early to late papers matching that of the AER).
In a few cases where Laband and Piette did not report 1970 citation data I substituted an alternate
measure reflecting how often papers published in 1968-1970 were being cited in 1977 (relative to similar
citations at the top general interest journals).
The Laband and Piette (1994a) data only give relative citations and thus I can not compare absolute
citation numbers in 1970 and later years.
They are Journal of Development Economics, Journal of Econometrics, Journal of Economic Theory,
Journal of International Economics, Journal of Law and Economics, Journal of Public Economics, Journal
of Urban Economics, and the RAND Journal of Economics (formerly the Bell Journal of Economics).
This includes two general interest journals (Canadian Journal of Economics and Economic Inquiry) and
five field journals.
JMonetE, were about as widely cited as the top general interest journals. In 1998, the most
cited field journal has only half as many cites as the top journals. Looking at the "other
general interest journals" listed in Table 20 it is striking that even the fourth highest in
the 1980 ranking (Economic Inquiry at 0.44) has an NCiteRatio well above the top journal
in the 1998 rankings (the EJ at 0.33).
Table 13: Changes in journal status: citations to recent articles relative to citations to top
five journals

                                      Mean of NCiteRatio for journals in group
    Set of journals                      1970     1980     1990     1998
    Next to top general interest         0.71     0.65     0.37     0.28
    Top field journals                   0.75     0.69     0.52     0.30
    Other journals                       0.30     0.39     0.25     0.15

The table illustrates the relative frequency with which recent articles in various groups of
journals have been cited in different years. (Citations to the top five journals are normalized
to one.) The variable NCiteRatio and the sets of journals are described in the text. The
raw data from which the means were computed are presented in Table 20 of Appendix C.
The data above can not tell us whether the quality of papers in the top journals has
improved or whether instead the average quality of papers in the top field journals has
declined as more journals divide the pool of available papers. They also can not tell us
whether the top journals are now able to attract much more attention to the papers they
publish (in which case authors would have a strong incentive to compete for scarce slots) or
whether there is no journal-specific effect on citations and papers in the top journals are just
more widely cited because they are better (in which case authors receive no extra benefit
and would have no increased desire to publish in the top journals). Combining the citation
data with the slight growth in the profession and the slight decline in the number of articles
top journals publish, however, my inference would be that there is now substantially more
competition for space in the top journals.
The second empirical question that thus becomes important is whether (and by how
much) an increase in the status of a journal leads to a lengthening of its review process.
I noted when presenting my very first table of submit-accept times (Table 1), that the
review process is clearly most drawn out at the top journals. In a cross-section regression
of the mean submit-accept time of a journal in 1999 on its citation ratios (for 22 journals)
I estimate the relationship to be (with t-statistics in parentheses)

    MeanLag_i,99 = 14.6 + 5.8 NCiteRatio_i,98
                  (8.7)  (1.8)

The coefficient of 5.8 on NCiteRatio indicates that as a group the top general interest
journals have review processes that are about 5.8 months longer than those at almost never
cited journals. The QJE is an outlier in this regression. If it is dropped the coefficient on
NCiteRatio increases to 11.1 and its t-statistic increases to 3.3.
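The cross-section fit above can be reproduced mechanically with a closed-form least-squares calculation. The sketch below uses four invented journal-level data points constructed to lie exactly on the estimated line (intercept 14.6, slope 5.8); it is not the actual 22-journal sample.

```python
# Minimal OLS sketch for a regression of mean submit-accept time (months)
# on NCiteRatio. The data points are invented and constructed to lie on
# the line 14.6 + 5.8 * x; the paper's real sample has 22 journals.

def ols(x, y):
    # Closed-form simple linear regression: returns (intercept, slope).
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return my - slope * mx, slope

ncite = [0.1, 0.3, 0.5, 1.0]            # hypothetical NCiteRatio values
lag = [14.6 + 5.8 * x for x in ncite]   # mean review times on the fitted line
intercept, slope = ols(ncite, lag)
# intercept ~ 14.6, slope ~ 5.8
```

With real (noisy) data one would of course also want the t-statistics the paper reports, which this toy calculation omits.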
The data on submit-accept times and citations for various journals also allow me to
examine how submit-accept times for each journal have changed over time as the journal
moves up or down in the journal hierarchy. Table 14 presents estimates of the regression

    MeanLag_it - MeanLag_i,t-Δt = a0 * (NCiteRatio_it - NCiteRatio_i,t-Δt) / NCiteRatio_i,t-Δt
                                  + a1 Dum7080_it + a2 Dum8090_it + a3 Dum9098_it + ε_it,
where i indexes journals and the changes at each journal over each decade are treated as
independent observations.^^ In the full sample, I find no relationship between changes in
review times and changes in journal citations. The 1990-1998 observation for the QJE
is a large outlier in this regression. One could also worry that it is contaminated by an
endogeneity bias — one reason why the QJE may have moved to the top of the citation
ranking is that its fast turnaround times may have allowed it to attract better papers.
When I reestimate the difference specification dropping this observation (in the second
column of the table), the coefficient estimate on the fraction change in the normalized
citation ratio increases to 5.3 and the estimate becomes significant. The data on within-
journal differences can thus also support a link between increases in a journal's status and
its review process lengthening. Hence, for the second time in this paper (the other being
page lengths) I have identified both a change in the profession and a link between this
change and slowing review times.
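The variables in the difference regression can be built up journal by journal, treating each decade-to-decade change as one observation. In this sketch the journal names and all numbers are invented for illustration.

```python
# Sketch of constructing the difference-regression variables. Each row holds
# (journal, decade pair, NCiteRatio at start/end, mean lag at start/end);
# every value below is invented for illustration.

panel = [
    ("JournalA", "7080", 0.80, 0.60, 10.0, 16.0),
    ("JournalB", "8090", 0.50, 0.70, 12.0, 18.0),
]

obs = []
for journal, decades, nc0, nc1, lag0, lag1 in panel:
    obs.append({
        "journal": journal,
        "d_lag": lag1 - lag0,               # dependent variable: change in mean lag
        "frac_d_ncite": (nc1 - nc0) / nc0,  # fraction change in NCiteRatio
        "Dum7080": int(decades == "7080"),  # decade dummies
        "Dum8090": int(decades == "8090"),
        "Dum9098": int(decades == "9098"),
    })
```

Stacking these rows gives the design matrix for the regression reported in Table 14, with each journal-decade change as an independent observation.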
How much of the slowdown over the last 30 years can be attributed to increases in
competition for space in the top journals? The answer depends both on which regression
estimate one uses and on what one assumes about overall quality/status changes. On the
Where the 1990 data are missing I use the 1980 to 1998 change as an observation.
In part because the data are not well known I do not think it is likely that the reverse relationship is
generally very important.
The high R²'s in both regressions reflect that the contributions of the dummies are included in the
R². If these are not counted the R² of the second regression is 0.15.
Table 14: Effect of journal prestige on submit-accept times

                                            Dep. var.: ΔMeanLag_it
    Independent variables                    Full        No QJE 98
    ΔNCiteRatio_it / NCiteRatio_i,t-Δt        0.8            5.3
                                             (0.4)          (2.4)
    Dum7080_it                                5.7            6.6
                                             (3.1)          (4.0)
    Dum8090_it                                5.5            6.4
                                             (5.0)          (6.4)
    Dum9098_it                                2.1            4.1
                                             (1.9)          (3.7)
    Number of Obs.                             44             43
    R²                                       0.55           0.65

The table reports regressions of changes in mean submit-accept times (usually over a ten
year interval) on the fraction change in NCiteRatio over the same time period. The data
include observations on 23 journals for a subset of 1970, 1980, 1990 and 1999 (or nearby
years). T-statistics are in parentheses.
low side, if one believes that most of the change in relative citations is just an accurate
reflection of the dilution of the set of papers available to the next tier journals, or if one
uses the estimate from the full sample difference regression, the answer would be about
none. On the high side, one might argue that there is just as much competition to publish
in, say, REStat today as there was in 1970. REStat has slowed down by about 10 months
since 1970, while the slowdown at the non-QJE top journals averages 14 months. This
comparison would indicate that four months of the slowdown in the non-QJE top journals
is due to the higher standards that the top journals now impose. This answer is also what
one gets if one uses the coefficient estimate from the difference regression that omits the
1990-1998 change in the QJE and assumes that REStat's status has been constant. My
view would be that it is probably easier to publish in REStat (or JET) now than it once
was and that increases in competition probably account for two or three months of the
slowdown at the top journals.
7 Changes in social norms
I use the term social norm to refer to the idea that the structure of the publication process is
determined by editors' and referees' understandings of what is "supposed" to be done with
submissions. Social norms may reflect economists' preferences about procedures and/or
what they like to see in published papers, but, as in the case of fashions, they may also
have little connection with any fundamental preferences. In the publication case, it seems
perfectly plausible to me to imagine that in a parallel universe another community of
economists with identical preferences could have adopted the norm of just publishing papers
in the form in which they are submitted, figuring that any defects in workmanship will reflect
on the author.
The general idea that otherwise inexplicable changes in the review process are due
to a shift in social norms is inherently unfalsifiable. It is also indistinguishable from the
hypothesis that shifts in unobserved variables have caused the slowdown. One can, however,
examine whether particular explanations for why social norms might change are supported
by the data. In this section I examine the explanation proposed in Ellison (2000) for why
social norms might tend to shift over time in the direction of placing an increasing emphasis
on revisions.
7.1 The potential explanation
The challenge in constructing a model of the evolution of norms is to envision an environ-
ment in which it is plausible that norms would slowly but continually evolve over the course
of decades. On the most abstract level, the idea behind the model of Ellison (2000) is that
such a dynamic is natural in a perturbation of a model with a continuum of equilibria. In
the case of journals, a continuum of equilibria can result when an arbitrary convention for
weighting multiple dimensions of quality must be adopted. The more specific argument
for why social norms may come to place more emphasis on revisions is that a shift may
be driven by economists' struggles to understand why their papers are being evaluated so
harshly by the same "top" journals that regularly publish a large number of lousy papers.
The mechanics of the model are that papers are assumed to vary in two quality dimen-
sions, q and r. I generally think of q as reflecting the clarity and importance of the main
contribution of a paper and r as reflecting other quality dimensions, e.g. completeness,
exposition, extensions, etc., that are more often the focus of revisions. Alternately, one can
also think of q as reflecting the author's contributions and r as reflecting the referees'. The
timing of the model is that in each period authors first allocate some fraction of their time
to developing a paper's q-quality, referees then assess q and report the level of r the paper
would have to achieve to be publishable, authors then devote additional time to improving
the r of their paper, and finally the editor fills the journal by accepting the papers that are
best in the prevailing social norm. Under the social norm (a, z), papers are regarded as
acceptable if and only if aq + (1 - a)r ≥ z.
Because the acceptance set has a downward-sloping frontier in (q, r) space, authors of
papers that turn out to have a very high q need only spend a little time adding r-quality
to ensure that their papers will be published. Authors of papers with intermediate q,
however, will spend all of their remaining time improving r, but will still fall short with
some probability. At the end of each period, each economist revises his or her understanding
of the social norm given observations about the level of r he or she was told was necessary
on his or her own submissions and given observations of the (q, r) of published papers.
Author/referees will have to reconcile conflicting evidence whenever the community of
referees tries to hold authors to an impossibly high standard, i.e. one that would not allow
the editor to fill the journal. In this case, authors will feel that the requests referees are
making of them are demanding (as they expected), and will be surprised to see a set of
papers that fall short of their understanding of the standard being accepted. The distribu-
tion of paper qualities that is generated when q is determined initially and later attempts
at marginal improvements focus on r is such that the unexpectedly accepted papers will
have relatively low q's and moderate to high r's in the distribution of resubmitted papers.
Economists rationalize the acceptance of these papers by concluding that overall quality
standards must be lower than they had thought and that r must be relatively more impor-
tant than they had thought. Any force that leads referees to always try to hold authors to
a level of overall quality that is slightly too high to be feasible will lead to a slight
continual drift in the direction of emphasizing r. In Ellison (2000) this is done by assuming
that the continuum of correct beliefs equilibria are destabilized by a cognitive bias that
makes authors think that their work is slightly better than others perceive it to be.
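The acceptance rule at the heart of the model is easy to state in code. The sketch below uses invented norm parameters and paper qualities to show how a drift in the weight a changes which papers are acceptable; it illustrates the rule only, not the full learning dynamics of Ellison (2000).

```python
# Toy illustration of the social-norm acceptance rule a*q + (1-a)*r >= z.
# The norm parameters and paper qualities below are invented for illustration.

def acceptable(q, r, a, z):
    # A paper with qualities (q, r) is acceptable under norm (a, z)
    # if and only if a*q + (1-a)*r >= z.
    return a * q + (1 - a) * r >= z

high_q_paper = (0.7, 0.2)    # strong main idea, little polish
polished_paper = (0.3, 0.7)  # modest idea, heavily revised

# Under a q-heavy norm (a = 0.8) the first paper is accepted and the second
# is not; after a drift toward r (a = 0.3) the ranking reverses.
old = [acceptable(q, r, 0.8, 0.5) for q, r in (high_q_paper, polished_paper)]
new = [acceptable(q, r, 0.3, 0.5) for q, r in (high_q_paper, polished_paper)]
# old == [True, False], new == [False, True]
```

In the full model the weight a is not set exogenously as here; it drifts as author/referees update their beliefs about the prevailing norm.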
What evidence might one look for in the data to help evaluate this suggestion for
why social norms may tend to drift? First, given that the model views social norms as a
somewhat arbitrary standard evolving within a community of author/referees, one might
expect in such a model to see social norms evolving differently in different isolated groups.
Second, what the model predicts is that norms should evolve slowly from whatever standard
the population believes to hold at a point in time. As a result, the model predicts that
review times will display hysteresis. For example, a transitory shock like the temporary
appointment of an editor who has a personal preference for requiring extensive revisions
would have a permanent impact on standards even after the editor has been replaced.
Finally, the model views the slowdown as a shift over time in the acceptance frontier in
(q, r)-space. One would thus want to see both that there is a downward sloping acceptance
frontier and that the slope of the frontier has shifted over time to place more emphasis on
r.
7.2 Evidence
In this subsection I will discuss some evidence relevant to the first and third predictions
mentioned above.
7.2.1 Are norms field-specific?
Social norms develop within an interacting community. Because economists typically only
referee papers in their field and receive referee reports written by others in their field,
the evolutionary view suggests that somewhat different norms may develop in different
fields. (Differences will be limited by economists' attempts to learn about norms from
their colleagues and by the prevalence of economists working in multiple fields.) Trivedi
(1993) notes that econometrics papers published in Econometrica between 1986 and 1990
had longer review times than other papers. Table 15 provides a much broader cross-field
comparison. It lists the mean submit-accept times for papers in various fields published in
top five journals in the 1990's. The data indicate that economists in different fields have
very different experiences with the publication process (and these differences are jointly
highly significant). There is, however, limited overlap in what is published across journals,
and in our standard regression with journal fixed effects and journal-specific trends, the
differences across fields are not jointly significant. It is thus hard to say from just the
data on general interest journals whether different fields have developed different norms or
if it is just that different journals have different practices.
Comparing Table 15 with Table 1 it is striking that the fields with the longest review
times at general interest journals seem also to have long review times at their field journals.
For example, the slowest field journal listed in Table 1 is the Journal of Econometrics.
There are eleven fields listed in Table 15 for which I also have data on a top field journal.
For these fields, the correlation between the review times in the two tables is 0.81. I take
this as clear evidence that there are field-specific differences in authors' experiences. These
data can not, however, tell
us whether the differences in review times are due to inherent differences in the complexity,
etc. of papers in the fields or whether they just reflect arbitrary norms that have developed
Hence, Trivedi's finding does not carry over to this larger set of fields and journals. The p-value for a
joint test of equality is 0.12.
Table 15: Total review time by field in the 1990's

    Field              # of    Mean       Field              # of    Mean
                      papers  s-a time                      papers  s-a time
    Econometrics        148     25.7      Macroeconomics      282     20.4
    Development          24     24.7      International        69     19.3
    Industrial org.     108     23.2      Political econ.      30     18.7
    Theory              356     22.9      Public finance       60     17.9
    Experimental         35     22.5      Productivity         15     16.2
    Finance             117     21.6      Environmental        10     15.5
    Labor               105     20.8      Law and econ.        13     14.5
    History              12     20.6      Urban                13     14.4

The table lists the mean submit-accept time (or submit-final resubmit for Econometrica)
in months for papers in each of sixteen fields published in top five journals in the 1990's,
along with the number of papers in the field for which the data were available.
differently in different fields.
One way in which I thought field-specific norms might be separated from journal and
complexity effects was by looking at finer field breakdowns. Table 16 provides a similar
look at mean submit-accept times for the ten subfields into which I divided microeconomic
theory. My hope was that such breakdowns could make field-specific differences in com-
plexity less of a worry, increase the number of fields that could be compared within each
journal, and lessen the problem of JPE theory papers being inappropriately compared
with very different Econometrica theory papers. The differences between theory subfields
indeed turn out to be very large, and they are also statistically significant at the one percent
level in a regression like our standard regression but with more field dummies. I take this
as suggestive that there are field-specific publishing norms within microeconomic theory.
7.2.2 Tradeoffs between q and r
As described above, Ellison (2000) suggests that the slowdown of the economics review
process can be thought of as part of a broader shift in the weights that are attached to
different aspects of paper quality. The model's framework is built around an assumption
that referees and editors make tradeoffs between different aspects of quality — papers with
more important main ideas (high q-quality) will be held to a lower standard on dimensions of
exposition, completeness, etc. (r-quality). It predicts that over time norms will increasingly
In fact, the full set of thirty-one dummies for the fields listed in Table 17 is jointly significant at the one
percent level.
Table 16: Total review time for theory subfields in the 1990's

    Field              # of    Mean       Field               # of    Mean
                      papers  s-a time                       papers  s-a time
    General equil.       34     27.9      Learning              13     21.6
    Game theory          82     26.3      Contract theory       59     21.2
    Unclassified         32     22.3      Auction theory        13     19.3
    Decision theory      28     22.1      Social choice         19     19.0
    Price theory         59     21.9      Welfare economics     17     16.9

The table lists the mean submit-accept time (or submit-final resubmit for Econometrica)
for papers in each of ten subfields of microeconomic theory published in top five journals in
the 1990's, along with the number of papers in the field for which the data were available.
emphasize r-quality.
To assess the assumption and the conclusion we would want to look for two things:
evidence that journals do make a q-r tradeoff and evidence that the way in which the q-r
tradeoff is made has shifted over time. The idea of this section is that review times may
be indicative of how much effort on r-quality is required of authors and that two available
variables that may proxy for q-quality are whether a paper is near the front or back of
a journal issue and how often it has been cited. Q-r tradeoffs can then be examined by
including two additional variables in the submit-accept time regression. Order is the order
in which an article appears in its issue in the journal, e.g. one indicates that a paper was
the lead article, two the second article, etc. Log(1 + Cites) is the natural logarithm of one
plus the total number of times the article has been cited. Summary statistics for these
variables can be found in Table 6. Note that a consequence of the growth in the number
of economics journals is that the mean and standard deviation of Log(1 + Cites) are not
much lower for papers published in the 1990's than they are for papers published in the
earlier decades.
The regression results provide fairly strong support for the idea that journals make a
q-r tradeoff. In all three decades papers that are earlier in a journal issue spent less time in
the review process. In all three decades papers that have gone on to be more widely cited
spent less time in the review process. Several of the estimates are highly significant.
The regressions do not, however, provide evidence to support the idea that there has
^^The citation data were obtained from the online version of the Social Science Citation Index in late
February 2000.
^^Laband et al (1990) had found very weak evidence of a negative relationship between citations and the
length of the review process in their study of papers published in REStat between 1976 and 1980.
been a shift over time to increasingly emphasize r. Comparisons of the regression coefficients
across decades can be problematic because the quality of the variables as proxies for q may
be changing. The general pattern, however, is that the coefficients on Order and Log(1 + Cites)
are getting larger over time. (The increase is not so sharp if one thinks of the magnitudes
of the effects relative to the mean review time.) This is not what would be expected if
q-quality were becoming less important.
8 Conclusion
Many other academic fields have experienced trends similar to those in economics. The
process of publishing has become more drawn out and the published papers are observably
different (Ellison 2000). Robert Lucas (1988) has said of economic growth that "Once one
begins to appreciate the importance of long-run growth to macroeconomic performance it
is hard to think about anything else." While I would not go so far as to advocate devoting
a comparable share of journal space to the study of journal review processes, one could
argue that review processes are a much more important topic for research: they have
changed a great deal, and they have a large impact not only on the progress made by
economists studying economic growth but also on the productivity of all other social and
natural scientists.
In trying to understand why the economics publishing process has become more drawn
out, I've noted that there are many seemingly plausible ways in which changes in the review
process could result from changes in the economics profession. I find some evidence for a
few effects. Papers are getting longer and longer papers take longer to review. This may
account for one or two months of the slowdown. The top journals appear to have become
more prestigious relative to the next tier of journals. Their ability to demand more of
authors may account for another three months.
My greatest reaction to the data, however, is that I don't see that there are many
fundamental differences between the economics profession now and the economics profession
in 1970. The profession doesn't appear to be much larger. It doesn't appear to be much
more democratic. I can't find the increasing specialization that I would have expected if
economic research were really much harder and more complex than it was thirty years ago.
I have also found evidence for very few of the potential explanations for why changes
in the profession would have slowed review times if such changes had occurred. I am led
For example, I know that the relationship between the order in which an article appears and how widely
cited it becomes has strengthened over time. This suggests that Order may now be a better proxy for q.
to conclude that perhaps there is no reason why economics papers must now be revised
so extensively prior to publication. The changes could instead reflect a shift in arbitrary
social norms that describe our understanding of what kinds of things journals are supposed
to ask authors to do and what published papers should look like.
I am sure that others will be able to think of alternate equilibrium explanations for the
slowdown that merit investigation. One possibility is that there are simply fewer important
ideas waiting to be discovered. Another is that increasingly long battles with referees may
be due to referees becoming more insecure or spiteful. Another is that economists today
may now have worse writing skills, e.g. being worse at focusing on and explaining a paper's
main contribution, perhaps due to trends in what is taught in high school and college.
Finally, there is another multiple equilibrium story: we may spend so much time revising
papers because authors (cognizant of the fact that they will have to revise papers later)
strategically send papers to journals before they are ready. I certainly believe that such
strategic behavior is widespread, but also believe that my data on the growth of revisions
understates the increase in polishing efforts. Looking back at published papers from the
1970's I definitely get the impression that even the first drafts of today's papers have been
rewritten more times, have more thorough introductions (with much more spin), have more
references, consider more extensions, etc.
What future work do I see as important? First, the social norms explanation I fall back
on is very incomplete. The crucial question it raises is why social norms have changed.
Further work to develop models of the evolution of social norms would be useful.
The empirical approach this paper takes to the general question of why publishing
standards have changed follows what is now standard practice in industrial organization:
to understand a general phenomenon, I focused on one industry (economics) where the
phenomenon is observed, where data were available, and where I thought I had or could
gain enough industry-specific knowledge to know what factors are important to consider.
Economics, however, is just one data point of the many that are available.
Many fields have similar (though usually less severe) slowdowns and many others do not.
Different disciplines will also differ in many of the dimensions studied here, e.g. in their
rates of growth. An inter-disciplinary study would thus have the potential to provide a
great deal of insight.
Studies that look in more depth at the changes in economics publishing would also
be valuable. For example, I would be very interested to see a descriptive study of how
the contents of referees' reports and editors' letters have changed over time. To better
understand the causes of multi-round reviews it would also be very useful to see whether a
blind observer (be they an experienced editor, a graduate student or a writing expert) can
predict from a paper's first draft how long it will take to be accepted, and if so which
characteristics of the paper they use.
The suggestion that the review process at economics journals might not be optimal
should not be surprising to economists. While there are lots of implicit incentives and
some nominal fees or payments, almost everything about the process is unpriced. Most
readers are not paying directly in a way that makes journal prices reflect readers' demand
for the articles; authors are not paid for their papers nor can they negotiate with journals
and reach agreements with payments going one way or the other in exchange for making
or not making revisions or to change publication decisions; referees are not paid anything
approaching their time cost and do not negotiate fees commensurate with the quality of
their contributions to a particular paper; etc.
The idea that the nature of the journal review process is largely determined by arbitrary
social norms can ironically be thought of as an optimistic world view. It suggests that the
review process could be changed dramatically if economists simply all decided that papers
should be assessed differently. Newspapers and popular magazines pubhsh articles and
columns about economics a fevvf days after they are v;ritten. Given the tremendous range
between this and the current review process in economics (or an even more drawn out
process if desired), it would seem valuable to have a discussion in the profession about
whether the current system captures economists' joint preferences.
Further research into the effects of the review process (as suggested in the JPE's 1990
editors' report) could enlighten this discussion.⁵⁷ A simple project suggested to me by Ilya
Segal would be to collect for a random sample of papers the version initially submitted to
a journal and the first and second revisions and blindly allocate them to three graduate
students. Seeing independent ratings of the drafts could teach us a lot about the value-added
of the process.
Finally, although not directly related to the slowdown, the paper suggests other avenues
for research into the economics profession. The observations about the profession I find most
striking are that economists do not seem to be becoming more specialized and that on a
relative citations basis the top journals are becoming more dominant. To see whether power
is becoming increasingly concentrated in the hands of the top journals, it would be interesting
to update Sauer's (1988) study of the relative value (in salary terms) of publications in
various journals and also to look for changes in which journals economists must publish in to
obtain and keep a position.⁵⁸ It may also be interesting for theorists to think about whether
the increased status of the top journals may be a natural consequence of the proliferation of
journals (or some other trend). With the recent growth (and potential future explosion) of
internet-based paper distribution, this may help us predict whether journals will continue
to direct the attention of the profession or whether great changes are in store.

⁵⁷The one piece of research I am aware of on the topic is Laband's (1990) study of citations for 75 papers
published in various journals in the late 1970s. It found that papers for which the ratio of the time authors
spent on revisions to the length of the comments they received was larger were more widely cited.
⁵⁸Sauer (1988) found that a publication in the 10th best journal was worth about 60 percent of a publi-
cation in the top journal and that a publication in the 80th best journal was worth about 20 percent of a
publication in the top journal.
Appendix A
The idea of the field-to-field distance measure is to regard fields as close together if
authors who write in one field also tend to write in the other. In particular, for pairs of
fields / and g I first define a correlation-like measure by
.f , P{f,g)-P{f)P{9)
VP(/)(1-P(/))F(5)(1-P(5))'
where P{f, g) is the fraction of pairs of papers by the same author that consist of one paper
from field / and one paper from field g (counting pairs with both papers in the same field
as two such observations), and P{f) is the fraction of papers in this set of pairs that are in
field /. I then construct a distance measure, d{f, g), which is normalized so that d{f, f) - 0
and so that d{f, (?) = 1 when writing a paper in field / neither increases nor decreases the
likelihood that an author will write a paper in field g by
rf(/,g) = l
C{f,g) 97
Vc{f,f)c{g,9)
I classified papers as belonging to one of thirty-one fields (again using JEL codes and
other rules). The field breakdown is the same as in the base regression except that I
have divided macroeconomics into three parts, international, finance, and econometrics
into two parts each, and theory into ten parts. See Table 17 for the complete list. To get
as much information as possible about the relationships between fields and about editors
with few publications in the top five journals, the distance matrix and editor profiles were
computed on a dataset which also included notes, shorter papers, and papers in three other
general interest journals for which I collected data: the AER's Papers and Proceedings issue,
Brookings Papers on Economic Activity, and REStat. Papers that were obviously comments
or replies to comments were dropped. All years from 1969 on were pooled together.
To illustrate the functioning of the distance measure, Table 17 lists for each of the
thirty-one fields up to three other fields that are closest to it. I include fewer than three
nearby fields when there are fewer than three fields at a distance of less than 0.99.
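The selection rule behind Table 17 (up to three nearest fields, subject to the 0.99 cutoff) can be sketched as follows; the distance dictionary here is made up purely for illustration and is not taken from the paper's data.

```python
def closest_fields(d, f, all_fields, k=3, cutoff=0.99):
    """Return up to k fields nearest to f with distance below cutoff.

    d: dict mapping (f, g) pairs to distances (hypothetical input).
    """
    neighbors = sorted((d[(f, g)], g) for g in all_fields if g != f)
    return [g for dist, g in neighbors[:k] if dist < cutoff]

# Illustrative distances only; History falls outside the 0.99 cutoff.
d = {('Labor', 'Urban'): 0.5,
     ('Labor', 'Public finance'): 0.7,
     ('Labor', 'History'): 1.2}
print(closest_fields(d, 'Labor', ['Urban', 'Public finance', 'History', 'Labor']))
```

This reproduces the behavior described in the text: a field like Labor with only two neighbors under the cutoff gets a two-entry row, shown with a dash in the table.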
To illustrate how the editor imputation works in different areas of economics, I
report in Table 18 the Editor Distance variable and the identity of the imputed editor for
all observations in the regression described in Table 10 belonging to the four economists
having the largest number of articles in the 1990's in the top five journals among those
working primarily in microeconomic theory, macroeconomics, econometrics and empirical
microeconomics: Jean Tirole, Ricardo Caballero, Donald Andrews and Alan Krueger.⁶⁰
Editors' names are in plain text if the editor's identity was known. Bold text indicates that
it was imputed correctly. Italics indicate that it was imputed incorrectly.

⁵⁹The assumption that within-field distances are zero for all fields ignores the possibility that some fields
are broader or more specialized than others. I experimented with using measures of specialization based on
JEL codes like those in the previous subsection to make the within-field distances different, but cross-field
comparisons like this are made difficult by the differences in the fineness and reasonableness of the JEL
breakdowns, and I found the resulting measure less appealing than setting all within-field distances to zero.
⁶⁰Note (especially with reference to Krueger) that this is not the same as the economists who contribute
the most observations to my regression from these fields given that I lack data on submit-accept times from
1990-1992 at the JPE and AER and for 1991-1992 at the QJE.
Table 17: Closest fields in the field-to-field distance measure

Field                            Three closest fields
Micro theory — unclassified      Industrial org., Micro - WE, Micro - GE
Micro theory — price theory      Micro - U, Micro - WE, Micro - DT
Micro theory — general eq.       Micro - U, Micro - WE, Micro - GT
Micro theory — welfare econ.     Micro - U, Public finance, Micro - GE
Micro theory — game theory       Micro - L, Micro - CT, Micro - SC
Micro theory — social choice     Political economy, Experimental, Micro - WE
Micro theory — contract th.      Micro - L, Micro - GT, Micro - U
Micro theory — auctions          Experimental, Micro - CT, Industrial org.
Micro theory — decision th.      Micro - PT, Micro - U, Micro - GT
Micro theory — learning          Micro - GT, Micro - CT, Finance - U
Macro — unclassified             Finance - U, International - IF, Macro - G
Macro — growth                   Productivity, Development, Macro - T
Macro — transition               Finance - C, Law and economics, Development
Econometrics — unclassified      Econometrics - TS, -, -
Econometrics — time series       Econometrics - U, -, -
Industrial organization          Micro - U, Micro - CT, Micro - A
Labor                            Urban, Public finance, -
International — unclassified     International - IF, Development, Macro - G
International — int'l finance    International - U, Macro - T, Macro - U
Public finance                   Micro - WE, Urban, Environmental
Finance — unclassified           Micro - L, Macro - U, Finance - C
Finance — corporate              Micro - U, Macro - T, Micro - CT
Development                      Macro - T, International - IF, International - U
Urban                            Labor, Law and economics, Public finance
History                          Productivity, Development, Other
Experimental                     Micro - A, Micro - SC, Micro - GT
Productivity                     Macro - G, Industrial org., History
Political economy                Micro - SC, Law and economics, Macro - T
Environmental                    Public finance, Development, Micro - WE
Law and economics                Political economy, Urban, Macro - T
Other                            History, Political economy, Urban

The table reports for each field in the 31-field breakdown the three other fields that are
closest to it. The distance measure is derived from an examination of the publication records
of authors with at least two publications in seven general interest journals since 1969 as
described in the text. A dash indicates that fewer than three fields are at a distance of less
than 0.99 from the field in the first column.
Table 18: Examples of editor assignments and distances

Journal                                                          Assumed      Editor
& Year   Title                                                   Editor       Distance

Papers by Jean Tirole
RES 90   Adverse Selection and Renegotiation in Procurement      Moore        0.56
EMA 90   Moral Hazard and Renegotiation in Agency Contracts      Kreps?       0.71
QJE 94   A Theory of Debt and Equity: Diversity of . . .         Shleifer     0.75
EMA 90   The Principal Agent Relationship with an . . .          Kreps        0.85
EMA 92   The Principal Agent Relationship with an . . .          Kreps        0.85
QJE 97   Financial Intermediation, Loanable Funds, and . . .     Blanchard    0.85
RES 96   A Theory of Collective Reputations (with . . .          Dewatripont  0.86
JPE 95   A Theory of Income and Dividend Smoothing . . .         Scheinkman   0.92
QJE 94   On the Management of Innovation                         Shleifer     0.98
JPE 93   Market Liquidity and Performance Monitoring             Scheinkman   1.00
JPE 98   Private and Public Supply of Liquidity                  Topel        1.03
JPE 97   Formal and Real Authority in Organizations              Rosen        1.06

Papers by Ricardo Caballero
QJE 90   Expenditure on Durable Goods: A Case for Slow . . .     Blanchard    0.23
QJE 93   Microeconomic Adjustment Hazards and Aggregate . . .    Blanchard    0.23
AER 94   The Cleansing Effect of Recessions                      Campbell     0.49
EMA 91   Dynamic (S,s) Economies                                 Deaton       0.68
JPE 93   Durable Goods: An Explanation for their Slow . . .      Lucas        0.73
AER 97   Aggregate Employment Dynamics: Building from . . .      West         0.79
QJE 96   The Timing and Efficiency of Creative Destruction       Katz         0.97
RES 94   Irreversibility and Aggregate Investment                Dewatripont  1.02

Papers by Alan Krueger
AER 94   Minimum Wages and Employment: A Case Study . . .        Ashenfelter  0.34
QJE 93   How Computers Have Changed the Wage Structure:          Katz         0.37
QJE 95   Economic Growth and the Environment                     Blanchard    0.96
AER 94   Estimates of the Economic Returns to Schooling . . .    Milgrom      1.04

Papers by Donald Andrews
EMA 94   The Large Sample Correspondence between Classical . . . Robinson     0.13
EMA 97   A Conditional Kolmogorov Test                           Robinson     0.13
EMA 97   A Stopping Rule for the Computation of Generalized . .  Robinson     0.25
EMA 91   Asymptotic Normality of Series Estimators for . . .     Hansen       0.59
EMA 94   Optimal Tests When a Nuisance Parameter Is Present . .  Hansen       0.59
EMA 94   Asymptotics for Semiparametric Econometric Models . . . Hansen       0.59
EMA 91   Heteroskedasticity and Autocorrelation Consistent . . . Hansen       0.60
EMA 93   Tests for Parameter Instability and Structural . . .    Hansen       0.60
EMA 93   Exactly Median-Unbiased Estimation of First Order . . . Hansen       0.60
RES 95   Nonlinear Econometric Models with Deterministically . . Jewitt       0.95

The table reports the imputed editor and the values of Editor Distance for papers in the
1990's dataset by four authors. The editor's name is in plain text if it was known. It is in
bold if it was imputed correctly and in italics if it was imputed incorrectly. Editor Distance
is a measure of how far the paper is from the editor's area of expertise. It is constructed
from data on cross-field authoring patterns as described in the text.
Appendix B
The set of seventeen main fields and the fraction of all articles in the top five journals
falling into each category are given in Table 19.
Table 19: Field breakdown of articles in top five journals

                             Percent of papers
Field                      1970's   1980's   1990's
Microeconomic theory        26.3     29.5     22.7
Macroeconomics              17.5     15.5     21.3
Econometrics                 9.5      9.1      8.7
Industrial organization      8.9     11.0      8.3
Labor                        9.8      9.0      8.6
International                6.9      5.4      5.6
Public finance               6.1      5.3      5.3
Finance                      5.2      5.3      7.7
Development                  3.8      1.4      1.6
Urban                        2.2      0.7      1.1
History                      1.1      1.9      1.0
Experimental                 0.4      1.3      2.5
Productivity                 1.4      1.2      0.9
Political economy            1.1      0.6      1.9
Environmental                0.4      0.4      0.8
Law and economics            0.3      0.3      1.0
Other                        3.1      2.3      1.2

The table reports the fraction of articles in the top five journals in each decade that are
categorized as belonging to each of the above fields. Data for the 1990's include data up
to the end of 1997 or mid-1998 depending on the journal.
Appendix C
Table 20: Recent citation ratios: average of top five journals normalized to one

                                            Value of NCiteRatio
Journal                                  1970    1980    1990    1998
Top five general interest journals
American Economic Review                ᵃ1.01    1.02    0.73    0.64
Econometrica                             0.86    0.95    1.71    1.00
Journal of Political Economy             0.81    1.69    1.11    1.23
Quarterly Journal of Economics           0.94    0.61    0.74    1.37
Review of Economic Studies               1.38    0.74    0.71    0.76
Other general interest journals
Canadian Journal of Economics           ᵇ0.34    0.24    0.18    0.06
Economic Inquiry                         0.26    0.44    0.29    0.15
Economic Journal                         0.65    0.78    0.49    0.33
International Economic Review            0.53    0.53    0.26    0.20
Review of Economics and Statistics       0.95    0.65    0.36    0.29
Economics field journals
Journal of Applied Econometrics            -       -    ᶜ0.32    0.26
Journal of Comparative Economics           -    ᶜ0.38    0.24    0.16
Journal of Development Economics           -    ᶜ0.28    0.30    0.16
Journal of Econometrics                    -    ᶜ0.49    0.53    0.36
Journal of Economic Theory              ᵇ0.78    0.69    0.40    0.21
Journal of Environmental Ec. & Man.        -    ᶜ0.46    0.21    0.16
Journal of International Economics         -     0.35    0.38    0.26
Journal of Law and Economics             0.71    1.26    0.87    0.51
Journal of Mathematical Economics          -    ᶜ0.42    0.28    0.10
Journal of Monetary Economics              -    ᶜ0.87    0.81    0.45
Journal of Public Economics                -    ᶜ0.56    0.34    0.19
Journal of Urban Economics                 -    ᶜ0.61    0.28    0.24
RAND Journal of Economics                  -     1.11   ᶜ0.78    0.31
Mean CiteRatio for "Top 5" journals        -     1.46    2.59    3.99

The table reports the measure NCiteRatio of the relative frequency with which recent
articles in each journal were cited in year t. The last row gives the mean of CiteRatio for
the first five journals listed. Notes: a - Value computed as a weighted average of values
reported in Laband and Piette for the regular and P&P issues. b - Value was not given by
Laband and Piette and data instead reflect 1977 citations to 1968-1970 articles. c - Journal
began publishing during period for which citations were tallied and values are adjusted in
accordance with the time-path of citations to the AER. d - Data are for 1982.
References
Cleary, F. R. and D. J. Edwards (1960): "The Origins of the Contributors to the AER
During the 'Fifties,'" American Economic Review 50, 1011-1014.
Coe, Robert K. and Irwin Weinstock (1967): "Editorial Policies of the Major Economics
Journals," Quarterly Review of Economics and Business 7, 37-43.
Colander, David (1989): "Research on the Economics Profession," Journal of Economic
Perspectives 3, 137-148.
Ellison, Glenn (2000): "Evolving Standards for Academic Publishing: A q-r Theory"
mimeo.
Ellison, Glenn and Edward L. Glaeser (1997): "Geographic Concentration in U.S. Manu-
facturing Industries: A Dartboard Approach," Journal of Political Economy 105, 889-927.
Gans, Joshua (ed.) (2000): Publishing Economics: An Analysis of the Academic Journal
Market in Economics. London: Edward Elgar.
Hamermesh, Daniel (1994): "Facts and Myths about Refereeing," Journal of Economic
Perspectives 8, 153-164.
Hudson, John (1996): "Trends in Multi-Authored Papers in Economics," Journal of Eco-
nomic Perspectives 10, 153-158.
Laband, David N. (1990): "Is There Value-Added from the Review Process in Economics?,"
Quarterly Journal of Economics 105, 341-353.
Laband, David N., R. E. McCormick and M. T. Maloney (1990): "The Review Process in
Economics: Some Empirical Findings," Review of Economics and Statistics 72A, v - xvii.
Laband, David N. and Michael J. Piette (1994a): "Favoritism versus Search for Good Pa-
pers: Empirical Evidence Regarding the Behavior of Journal Editors," Journal of Political
Economy 102, 194-203.
Laband, David N. and Michael J. Piette (1994b): "The Relative Impacts of Economics
Journals: 1970 - 1990," Journal of Economic Literature 32, 640-666.
Laband, David N. and John M. Wells (1998): "The Scholarly Journal Literature of Eco-
nomics: A Historical Profile of the AER, JPE, and QJE," The American Economist 42,
47-58.
Lucas, Robert E., Jr. (1988): "On the Mechanics of Economic Development," Journal of
Monetary Economics 22, 3-42.
Marshall, Howard D. (1959): "Publication Policies at the Economic Journals," American
Economic Review 49, 133-138.
Oster, Sharon and Daniel S. Hamermesh (1998): "Aging and Productivity Among Economists,"
Review of Economics and Statistics 80, 154-156.
Sauer, Raymond D. (1988): "Estimates of the Returns to Quality and Coauthorship in
Economic Academia," Journal of Political Economy 96, 855-866.
Siegfried, John J. (1972): "The Publishing of Economics Papers and Its Impact on Graduate
Faculty Ratings, 1960-1969," Journal of Economic Literature 10, 31-49.
Siegfried, John J. (1994): "Trends in Institutional Affiliation of Authors Who Publish in
the Three Leading General Interest Journals," The Quarterly Review of Economics and
Finance 34, 375-386.
Siegfried, John J. (1998): "Who is a Member of the AEA?," Journal of Economic Perspec-
tives 12, 211-222.
Stern, Scott and Manuel Trajtenberg (1998): "Empirical Implications of Physician Author-
ity in Pharmaceutical Decisionmaking," National Bureau of Economic Research Working
Paper 6851.
Trivedi, P. K. (1993): "An Analysis of Publication Lags in Econometrics," Journal of
Applied Econometrics 8, 93-100.
Yohe, Gary W. (1980): "Current Publication Lags in Economics Journals," Journal of
Economic Literature 18, 1050-1055.
Yotopoulos, P. A. (1961): "Institutional Affiliation of the Contributors to Three Professional
Journals," American Economic Review 51, 665-670.